Test Report: Hyperkit_macOS 19648

584241d6059a856bd6609ebe9456581adc627cea:2024-09-17:36253
Test failures: 20/219
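The failure detailed below is TestOffline, which gives up after roughly three minutes of VM provisioning. The start invocation is quoted verbatim from the log and can be replayed by hand; a minimal repro sketch, assuming a locally built darwin binary under out/ and the docker-machine-driver-hyperkit plugin installed:

	out/minikube-darwin-amd64 start -p offline-docker-246000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit

Per minikube's exit-code conventions, the 80-89 range covers guest-VM errors, so the "exit status 80" below points at guest provisioning (the driver is still polling /var/db/dhcpd_leases for the VM's generated MAC when the excerpt ends) rather than at the test harness itself.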

TestOffline (195.38s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-246000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-246000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m9.949212921s)

-- stdout --
	* [offline-docker-246000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-246000" primary control-plane node in "offline-docker-246000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-246000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	I0917 02:38:14.699462    6372 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:38:14.699861    6372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:38:14.699867    6372 out.go:358] Setting ErrFile to fd 2...
	I0917 02:38:14.699871    6372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:38:14.700075    6372 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:38:14.701887    6372 out.go:352] Setting JSON to false
	I0917 02:38:14.726516    6372 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4064,"bootTime":1726561830,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:38:14.726615    6372 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:38:14.785620    6372 out.go:177] * [offline-docker-246000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:38:14.844938    6372 notify.go:220] Checking for updates...
	I0917 02:38:14.868926    6372 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:38:14.890060    6372 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:38:14.912029    6372 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:38:14.939111    6372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:38:14.958766    6372 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:38:14.979927    6372 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:38:15.001212    6372 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:38:15.029707    6372 out.go:177] * Using the hyperkit driver based on user configuration
	I0917 02:38:15.071239    6372 start.go:297] selected driver: hyperkit
	I0917 02:38:15.071267    6372 start.go:901] validating driver "hyperkit" against <nil>
	I0917 02:38:15.071285    6372 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:38:15.075841    6372 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:38:15.075983    6372 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:38:15.084223    6372 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:38:15.087902    6372 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:38:15.087920    6372 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:38:15.087951    6372 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:38:15.088232    6372 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:38:15.088273    6372 cni.go:84] Creating CNI manager for ""
	I0917 02:38:15.088307    6372 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:38:15.088313    6372 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:38:15.088387    6372 start.go:340] cluster config:
	{Name:offline-docker-246000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:38:15.088472    6372 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:38:15.155949    6372 out.go:177] * Starting "offline-docker-246000" primary control-plane node in "offline-docker-246000" cluster
	I0917 02:38:15.197905    6372 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:38:15.197938    6372 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:38:15.197952    6372 cache.go:56] Caching tarball of preloaded images
	I0917 02:38:15.198069    6372 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:38:15.198077    6372 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:38:15.198354    6372 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/offline-docker-246000/config.json ...
	I0917 02:38:15.198373    6372 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/offline-docker-246000/config.json: {Name:mk6d3e11162b4334f799d782cb2684c50a376169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:38:15.198726    6372 start.go:360] acquireMachinesLock for offline-docker-246000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:38:15.198795    6372 start.go:364] duration metric: took 51.861µs to acquireMachinesLock for "offline-docker-246000"
	I0917 02:38:15.198818    6372 start.go:93] Provisioning new machine with config: &{Name:offline-docker-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:38:15.198855    6372 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 02:38:15.219719    6372 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:38:15.219874    6372 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:38:15.219913    6372 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:38:15.228955    6372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54028
	I0917 02:38:15.229302    6372 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:38:15.229704    6372 main.go:141] libmachine: Using API Version  1
	I0917 02:38:15.229714    6372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:38:15.229996    6372 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:38:15.230121    6372 main.go:141] libmachine: (offline-docker-246000) Calling .GetMachineName
	I0917 02:38:15.230215    6372 main.go:141] libmachine: (offline-docker-246000) Calling .DriverName
	I0917 02:38:15.230328    6372 start.go:159] libmachine.API.Create for "offline-docker-246000" (driver="hyperkit")
	I0917 02:38:15.230354    6372 client.go:168] LocalClient.Create starting
	I0917 02:38:15.230385    6372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem
	I0917 02:38:15.230436    6372 main.go:141] libmachine: Decoding PEM data...
	I0917 02:38:15.230450    6372 main.go:141] libmachine: Parsing certificate...
	I0917 02:38:15.230525    6372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem
	I0917 02:38:15.230571    6372 main.go:141] libmachine: Decoding PEM data...
	I0917 02:38:15.230583    6372 main.go:141] libmachine: Parsing certificate...
	I0917 02:38:15.230603    6372 main.go:141] libmachine: Running pre-create checks...
	I0917 02:38:15.230611    6372 main.go:141] libmachine: (offline-docker-246000) Calling .PreCreateCheck
	I0917 02:38:15.230703    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:15.230872    6372 main.go:141] libmachine: (offline-docker-246000) Calling .GetConfigRaw
	I0917 02:38:15.231331    6372 main.go:141] libmachine: Creating machine...
	I0917 02:38:15.231339    6372 main.go:141] libmachine: (offline-docker-246000) Calling .Create
	I0917 02:38:15.231411    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:15.231531    6372 main.go:141] libmachine: (offline-docker-246000) DBG | I0917 02:38:15.231407    6393 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:38:15.231582    6372 main.go:141] libmachine: (offline-docker-246000) Downloading /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1025/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0917 02:38:15.716286    6372 main.go:141] libmachine: (offline-docker-246000) DBG | I0917 02:38:15.716191    6393 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/id_rsa...
	I0917 02:38:15.869857    6372 main.go:141] libmachine: (offline-docker-246000) DBG | I0917 02:38:15.869787    6393 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/offline-docker-246000.rawdisk...
	I0917 02:38:15.869896    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Writing magic tar header
	I0917 02:38:15.869915    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Writing SSH key tar header
	I0917 02:38:15.870287    6372 main.go:141] libmachine: (offline-docker-246000) DBG | I0917 02:38:15.870241    6393 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000 ...
	I0917 02:38:16.327174    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:16.327191    6372 main.go:141] libmachine: (offline-docker-246000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/hyperkit.pid
	I0917 02:38:16.327221    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Using UUID 2c83a60f-d277-4a1c-b51f-a7a8479ca9b3
	I0917 02:38:16.606445    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Generated MAC 7a:e9:d9:c4:c3:7e
	I0917 02:38:16.606481    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-246000
	I0917 02:38:16.606575    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:16 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c83a60f-d277-4a1c-b51f-a7a8479ca9b3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:38:16.606654    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:16 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c83a60f-d277-4a1c-b51f-a7a8479ca9b3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:38:16.606762    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:16 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2c83a60f-d277-4a1c-b51f-a7a8479ca9b3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/offline-docker-246000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-246000"}
	I0917 02:38:16.606857    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:16 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2c83a60f-d277-4a1c-b51f-a7a8479ca9b3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/offline-docker-246000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-246000"
	I0917 02:38:16.606882    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:16 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:38:16.610873    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:16 DEBUG: hyperkit: Pid is 6421
	I0917 02:38:16.611403    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 0
	I0917 02:38:16.611425    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:16.611506    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:16.612705    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:16.612934    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:16.612947    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:16.612973    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:16.612991    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:16.613009    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:16.613023    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:16.613036    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:16.613050    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:16.613062    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:16.613074    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:16.613088    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:16.613100    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:16.613114    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:16.613151    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:16.613187    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:16.613200    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:16.613217    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:16.613246    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:16.613256    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:16.618772    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:16 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:38:16.672128    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:16 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:38:16.691260    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:38:16.691293    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:38:16.691303    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:38:16.691312    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:38:17.070125    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:38:17.070139    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:38:17.184920    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:38:17.184940    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:38:17.184950    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:38:17.184963    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:38:17.185772    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:38:17.185783    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:38:18.613344    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 1
	I0917 02:38:18.613355    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:18.613449    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:18.614228    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:18.614288    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:18.614296    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:18.614315    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:18.614325    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:18.614333    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:18.614342    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:18.614348    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:18.614361    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:18.614372    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:18.614379    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:18.614391    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:18.614402    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:18.614411    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:18.614418    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:18.614425    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:18.614442    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:18.614458    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:18.614467    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:18.614475    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:20.615828    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 2
	I0917 02:38:20.615845    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:20.615891    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:20.616814    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:20.616876    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:20.616886    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:20.616897    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:20.616906    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:20.616914    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:20.616921    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:20.616927    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:20.616932    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:20.616941    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:20.616957    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:20.616969    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:20.616978    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:20.616986    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:20.616999    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:20.617008    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:20.617016    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:20.617022    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:20.617029    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:20.617046    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:22.593135    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:22 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:38:22.593289    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:22 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:38:22.593300    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:22 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:38:22.612753    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:38:22 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:38:22.617668    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 3
	I0917 02:38:22.617684    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:22.617790    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:22.618849    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:22.618909    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:22.618919    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:22.618927    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:22.618933    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:22.618939    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:22.618945    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:22.618955    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:22.618962    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:22.618969    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:22.618976    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:22.618991    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:22.619003    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:22.619014    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:22.619031    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:22.619044    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:22.619052    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:22.619059    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:22.619066    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:22.619071    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:24.621052    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 4
	I0917 02:38:24.621082    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:24.621170    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:24.621942    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:24.622000    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:24.622010    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:24.622024    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:24.622034    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:24.622042    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:24.622050    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:24.622056    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:24.622063    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:24.622069    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:24.622075    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:24.622090    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:24.622100    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:24.622109    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:24.622126    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:24.622183    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:24.622207    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:24.622214    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:24.622222    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:24.622230    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:26.622807    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 5
	I0917 02:38:26.622830    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:26.622850    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:26.623658    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:26.623730    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:26.623743    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:26.623753    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:26.623763    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:26.623775    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:26.623790    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:26.623800    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:26.623815    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:26.623828    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:26.623849    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:26.623863    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:26.623871    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:26.623880    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:26.623896    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:26.623921    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:26.623928    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:26.623943    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:26.623952    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:26.623962    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:28.625935    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 6
	I0917 02:38:28.625961    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:28.625979    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:28.626755    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:28.626810    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:28.626824    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:28.626833    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:28.626838    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:28.626848    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:28.626863    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:28.626876    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:28.626883    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:28.626904    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:28.626914    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:28.626923    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:28.626930    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:28.626942    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:28.626961    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:28.626974    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:28.626989    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:28.626998    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:28.627007    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:28.627014    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:30.628993    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 7
	I0917 02:38:30.629008    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:30.629092    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:30.629916    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:30.629981    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:30.629994    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:30.630002    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:30.630011    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:30.630047    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:30.630061    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:30.630070    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:30.630079    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:30.630090    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:30.630097    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:30.630116    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:30.630124    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:30.630131    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:30.630136    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:30.630148    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:30.630158    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:30.630166    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:30.630171    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:30.630189    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:32.631678    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 8
	I0917 02:38:32.631690    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:32.631753    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:32.632546    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:32.632593    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:32.632604    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:32.632613    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:32.632619    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:32.632626    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:32.632631    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:32.632639    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:32.632648    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:32.632656    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:32.632673    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:32.632682    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:32.632689    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:32.632694    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:32.632708    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:32.632720    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:32.632736    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:32.632748    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:32.632760    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:32.632769    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:34.633151    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 9
	I0917 02:38:34.633166    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:34.633237    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:34.634020    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:34.634064    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:34.634085    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:34.634097    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:34.634105    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:34.634110    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:34.634123    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:34.634132    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:34.634141    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:34.634149    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:34.634165    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:34.634187    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:34.634198    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:34.634205    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:34.634213    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:34.634223    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:34.634237    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:34.634251    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:34.634258    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:34.634266    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:36.634802    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 10
	I0917 02:38:36.634816    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:36.634875    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:36.635645    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:36.635702    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:36.635718    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:36.635736    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:36.635746    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:36.635753    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:36.635759    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:36.635765    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:36.635772    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:36.635782    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:36.635803    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:36.635809    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:36.635816    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:36.635824    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:36.635832    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:36.635840    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:36.635847    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:36.635854    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:36.635861    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:36.635868    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:38.636241    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 11
	I0917 02:38:38.636256    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:38.636344    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:38.637106    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:38.637160    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:38.637169    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:38.637177    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:38.637182    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:38.637201    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:38.637208    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:38.637214    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:38.637222    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:38.637228    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:38.637234    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:38.637250    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:38.637269    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:38.637278    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:38.637287    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:38.637295    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:38.637300    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:38.637322    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:38.637331    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:38.637339    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:40.638265    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 12
	I0917 02:38:40.638277    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:40.638342    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:40.639152    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:40.639191    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:40.639202    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:40.639211    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:40.639217    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:40.639227    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:40.639242    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:40.639253    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:40.639260    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:40.639268    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:40.639276    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:40.639284    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:40.639291    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:40.639306    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:40.639313    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:40.639338    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:40.639348    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:40.639355    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:40.639372    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:40.639384    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:42.641419    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 13
	I0917 02:38:42.641431    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:42.641483    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:42.642341    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:42.642384    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:42.642395    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:42.642421    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:42.642430    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:42.642443    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:42.642450    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:42.642458    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:42.642467    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:42.642483    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:42.642494    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:42.642504    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:42.642511    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:42.642517    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:42.642533    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:42.642540    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:42.642546    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:42.642552    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:42.642568    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:42.642584    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:44.643618    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 14
	I0917 02:38:44.643631    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:44.643697    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:44.644466    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:44.644539    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:44.644554    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:44.644568    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:44.644576    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:44.644585    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:44.644593    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:44.644599    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:44.644605    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:44.644611    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:44.644617    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:44.644623    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:44.644637    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:44.644648    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:44.644660    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:44.644667    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:44.644675    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:44.644683    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:44.644691    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:44.644699    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:46.645246    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 15
	I0917 02:38:46.645259    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:46.645327    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:46.646151    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:46.646192    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:46.646204    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:46.646227    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:46.646243    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:46.646269    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:46.646282    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:46.646290    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:46.646298    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:46.646304    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:46.646310    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:46.646317    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:46.646327    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:46.646337    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:46.646344    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:46.646351    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:46.646364    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:46.646381    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:46.646397    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:46.646409    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:48.646969    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 16
	I0917 02:38:48.646986    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:48.647046    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:48.647801    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:48.647848    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:48.647858    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:48.647865    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:48.647871    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:48.647889    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:48.647903    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:48.647920    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:48.647928    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:48.647946    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:48.647958    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:48.647966    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:48.647978    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:48.647986    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:48.648004    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:48.648012    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:48.648021    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:48.648029    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:48.648037    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:48.648050    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:50.648556    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 17
	I0917 02:38:50.648571    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:50.648593    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:50.649503    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:50.649542    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:50.649552    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:50.649561    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:50.649571    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:50.649578    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:50.649584    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:50.649590    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:50.649598    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:50.649612    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:50.649627    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:50.649635    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:50.649643    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:50.649652    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:50.649660    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:50.649672    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:50.649681    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:50.649690    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:50.649696    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:50.649711    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:52.651733    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 18
	I0917 02:38:52.651746    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:52.651823    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:52.652616    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:52.652661    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:52.652672    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:52.652680    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:52.652689    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:52.652698    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:52.652708    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:52.652716    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:52.652732    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:52.652741    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:52.652762    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:52.652775    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:52.652783    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:52.652791    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:52.652798    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:52.652805    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:52.652811    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:52.652819    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:52.652832    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:52.652844    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:54.654212    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 19
	I0917 02:38:54.654226    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:54.654284    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:54.655014    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:54.655078    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:54.655092    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:54.655114    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:54.655121    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:54.655127    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:54.655137    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:54.655146    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:54.655155    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:54.655176    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:54.655184    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:54.655190    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:54.655208    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:54.655216    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:54.655226    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:54.655233    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:54.655248    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:54.655260    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:54.655268    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:54.655275    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:56.656054    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 20
	I0917 02:38:56.656066    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:56.656157    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:56.657045    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:56.657084    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:56.657096    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:56.657106    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:56.657123    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:56.657138    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:56.657155    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:56.657171    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:56.657182    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:56.657193    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:56.657201    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:56.657208    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:56.657217    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:56.657224    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:56.657229    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:56.657238    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:56.657248    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:56.657255    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:56.657260    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:56.657266    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:38:58.657710    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 21
	I0917 02:38:58.657724    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:38:58.657826    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:38:58.658595    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:38:58.658664    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:38:58.658677    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:38:58.658686    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:38:58.658715    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:38:58.658728    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:38:58.658738    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:38:58.658744    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:38:58.658754    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:38:58.658761    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:38:58.658771    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:38:58.658779    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:38:58.658786    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:38:58.658793    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:38:58.658799    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:38:58.658805    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:38:58.658813    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:38:58.658821    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:38:58.658827    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:38:58.658834    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:00.660167    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 22
	I0917 02:39:00.660185    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:00.660258    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:00.661034    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:39:00.661079    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:00.661086    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:00.661094    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:00.661099    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:00.661106    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:00.661115    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:00.661137    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:00.661150    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:00.661158    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:00.661168    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:00.661185    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:00.661193    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:00.661200    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:00.661207    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:00.661223    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:00.661235    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:00.661252    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:00.661260    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:00.661268    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:02.663301    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 23
	I0917 02:39:02.663316    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:02.663382    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:02.664193    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:39:02.664242    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:02.664256    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:02.664270    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:02.664281    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:02.664300    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:02.664311    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:02.664317    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:02.664323    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:02.664332    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:02.664339    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:02.664346    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:02.664353    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:02.664362    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:02.664368    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:02.664375    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:02.664384    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:02.664391    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:02.664403    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:02.664412    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:04.664685    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 24
	I0917 02:39:04.664697    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:04.664757    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:04.665635    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:39:04.665707    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:04.665720    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:04.665752    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:04.665764    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:04.665772    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:04.665779    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:04.665785    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:04.665790    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:04.665803    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:04.665811    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:04.665817    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:04.665825    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:04.665841    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:04.665854    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:04.665861    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:04.665869    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:04.665887    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:04.665900    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:04.665909    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:06.667919    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 25
	I0917 02:39:06.667934    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:06.667971    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:06.668869    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:39:06.668931    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:06.668954    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:06.668964    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:06.668972    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:06.668978    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:06.668995    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:06.669005    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:06.669012    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:06.669018    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:06.669025    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:06.669034    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:06.669041    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:06.669047    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:06.669053    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:06.669060    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:06.669068    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:06.669081    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:06.669093    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:06.669102    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:08.671151    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 26
	I0917 02:39:08.671166    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:08.671204    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:08.672029    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:39:08.672083    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:08.672096    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:08.672110    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:08.672121    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:08.672154    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:08.672168    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:08.672176    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:08.672185    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:08.672191    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:08.672200    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:08.672205    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:08.672214    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:08.672222    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:08.672228    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:08.672234    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:08.672255    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:08.672267    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:08.672277    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:08.672292    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:10.673321    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 27
	I0917 02:39:10.673335    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:10.673404    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:10.674185    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:39:10.674239    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:10.674248    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:10.674255    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:10.674263    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:10.674271    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:10.674278    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:10.674284    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:10.674292    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:10.674298    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:10.674313    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:10.674321    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:10.674329    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:10.674337    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:10.674351    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:10.674362    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:10.674377    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:10.674392    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:10.674401    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:10.674409    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:12.676291    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 28
	I0917 02:39:12.676303    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:12.676388    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:12.677188    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:39:12.677201    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:12.677214    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:12.677225    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:12.677255    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:12.677265    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:12.677272    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:12.677277    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:12.677288    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:12.677298    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:12.677304    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:12.677311    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:12.677325    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:12.677337    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:12.677345    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:12.677353    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:12.677360    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:12.677365    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:12.677376    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:12.677390    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:14.677950    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 29
	I0917 02:39:14.677975    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:14.678058    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:14.678884    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for 7a:e9:d9:c4:c3:7e in /var/db/dhcpd_leases ...
	I0917 02:39:14.678895    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:14.678901    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:14.678908    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:14.678913    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:14.678920    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:14.678926    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:14.678939    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:14.678951    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:14.678968    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:14.678989    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:14.679004    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:14.679016    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:14.679025    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:14.679030    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:14.679051    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:14.679063    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:14.679080    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:14.679089    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:14.679099    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
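
Each numbered attempt above is one scan of macOS's /var/db/dhcpd_leases: the driver dumps every lease entry while searching for the MAC it generated for this VM (7a:e9:d9:c4:c3:7e), which never appears. Note that bootpd writes each octet without leading zeros (e.g. a:b6:8:34:25:a6 above), so a robust match has to normalize both sides. A minimal Go sketch of such a lookup follows; it is not the hyperkit driver's actual code, and the brace-delimited key=value on-disk format and field ordering are assumptions:

// lease_scan.go — a minimal sketch of matching a VM's MAC against macOS
// /var/db/dhcpd_leases entries like those dumped above. Assumes the file holds
// brace-delimited blocks of key=value pairs with ip_address before hw_address.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// normalizeMAC lower-cases a MAC and strips leading zeros from each octet,
// matching how dhcpd_leases prints them (e.g. "a:b6:8:34:25:a6").
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		parts[i] = strings.TrimLeft(p, "0")
		if parts[i] == "" {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

// findIPByMAC scans the leases file for a hw_address matching mac and returns
// the associated ip_address, or "" if no lease has been handed out yet.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	want := normalizeMAC(mac)
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is "1,<mac>"; drop the leading hardware-type byte.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(hw, ","); i >= 0 {
				hw = hw[i+1:]
			}
			if normalizeMAC(hw) == want {
				return ip, nil
			}
		case line == "}":
			ip = "" // block ended without a match; reset for the next entry
		}
	}
	return "", sc.Err()
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "7a:e9:d9:c4:c3:7e")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if ip == "" {
		fmt.Println("no lease yet") // the case every attempt above keeps hitting
	} else {
		fmt.Println("found", ip)
	}
}
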
	I0917 02:39:16.681150    6372 client.go:171] duration metric: took 1m1.450505925s to LocalClient.Create
	I0917 02:39:18.683016    6372 start.go:128] duration metric: took 1m3.483861426s to createHost
	I0917 02:39:18.683033    6372 start.go:83] releasing machines lock for "offline-docker-246000", held for 1m3.483942165s
	W0917 02:39:18.683048    6372 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:e9:d9:c4:c3:7e
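
The timestamps show one probe every two seconds, attempts 0 through 29, which is where the 1m1.45s LocalClient.Create figure comes from before the driver gives up with the error above. A minimal sketch of that fixed-interval poll with a deadline (the 2s interval and overall budget are read off the log timestamps, not taken from source):

// A minimal sketch of the fixed-interval "wait for a DHCP lease" loop whose
// timeout produced the error above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP calls lookup until it yields an address or the deadline passes.
func waitForIP(lookup func() string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		fmt.Printf("Attempt %d\n", attempt)
		if ip := lookup(); ip != "" {
			return ip, nil
		}
		time.Sleep(interval)
	}
	return "", errors.New("IP address never found in dhcp leases file")
}

func main() {
	// Simulate the failing case: the MAC never shows up, so every probe is empty.
	_, err := waitForIP(func() string { return "" }, 2*time.Second, 10*time.Second)
	fmt.Println(err)
}
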
	I0917 02:39:18.683384    6372 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:39:18.683408    6372 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:39:18.692556    6372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54064
	I0917 02:39:18.693091    6372 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:39:18.693542    6372 main.go:141] libmachine: Using API Version  1
	I0917 02:39:18.693559    6372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:39:18.693845    6372 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:39:18.694247    6372 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:39:18.694279    6372 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:39:18.702922    6372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54066
	I0917 02:39:18.703385    6372 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:39:18.703864    6372 main.go:141] libmachine: Using API Version  1
	I0917 02:39:18.703879    6372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:39:18.704152    6372 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:39:18.704267    6372 main.go:141] libmachine: (offline-docker-246000) Calling .GetState
	I0917 02:39:18.704368    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:18.704441    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:18.705455    6372 main.go:141] libmachine: (offline-docker-246000) Calling .DriverName
	I0917 02:39:18.726316    6372 out.go:177] * Deleting "offline-docker-246000" in hyperkit ...
	I0917 02:39:18.768118    6372 main.go:141] libmachine: (offline-docker-246000) Calling .Remove
	I0917 02:39:18.768255    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:18.768264    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:18.768336    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:18.769293    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:18.769345    6372 main.go:141] libmachine: (offline-docker-246000) DBG | waiting for graceful shutdown
	I0917 02:39:19.769797    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:19.769875    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:19.770817    6372 main.go:141] libmachine: (offline-docker-246000) DBG | waiting for graceful shutdown
	I0917 02:39:20.772804    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:20.772898    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:20.774536    6372 main.go:141] libmachine: (offline-docker-246000) DBG | waiting for graceful shutdown
	I0917 02:39:21.775253    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:21.775392    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:21.776026    6372 main.go:141] libmachine: (offline-docker-246000) DBG | waiting for graceful shutdown
	I0917 02:39:22.776167    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:22.776242    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:22.776988    6372 main.go:141] libmachine: (offline-docker-246000) DBG | waiting for graceful shutdown
	I0917 02:39:23.777625    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:23.777687    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6421
	I0917 02:39:23.778774    6372 main.go:141] libmachine: (offline-docker-246000) DBG | sending sigkill
	I0917 02:39:23.778782    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:23.788994    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:39:23 WARN : hyperkit: failed to read stderr: EOF
	I0917 02:39:23.789010    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:39:23 WARN : hyperkit: failed to read stdout: EOF
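
Tear-down follows the same cadence: the driver polls once per second for a graceful shutdown and, after roughly five seconds, falls back to SIGKILL; the two EOF warnings are hyperkit's stdout/stderr pipes closing as the process dies. A sketch of that stop-then-kill sequence (the grace window and the use of SIGTERM as the polite signal are assumptions inferred from the timestamps; pid 6421 is from this run):

// A minimal sketch of the stop-then-kill pattern logged above: probe the
// hyperkit pid once per second, then SIGKILL it when the grace period expires.
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

func stopVM(pid int, grace time.Duration) error {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return err
	}
	// Ask politely first (assumed: SIGTERM triggers hyperkit's shutdown path).
	_ = proc.Signal(syscall.SIGTERM)
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 only probes for existence; an error means the process is gone.
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil
		}
		fmt.Println("waiting for graceful shutdown")
		time.Sleep(time.Second)
	}
	fmt.Println("sending sigkill")
	return proc.Kill()
}

func main() {
	if err := stopVM(6421, 5*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
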
	W0917 02:39:23.804038    6372 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:e9:d9:c4:c3:7e
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:e9:d9:c4:c3:7e
	I0917 02:39:23.804056    6372 start.go:729] Will try again in 5 seconds ...
	I0917 02:39:28.806130    6372 start.go:360] acquireMachinesLock for offline-docker-246000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:40:21.521529    6372 start.go:364] duration metric: took 52.715127886s to acquireMachinesLock for "offline-docker-246000"
	I0917 02:40:21.521558    6372 start.go:93] Provisioning new machine with config: &{Name:offline-docker-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:40:21.521611    6372 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 02:40:21.542969    6372 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:40:21.543059    6372 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:40:21.543085    6372 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:40:21.551571    6372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54074
	I0917 02:40:21.551924    6372 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:40:21.552333    6372 main.go:141] libmachine: Using API Version  1
	I0917 02:40:21.552356    6372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:40:21.552572    6372 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:40:21.552688    6372 main.go:141] libmachine: (offline-docker-246000) Calling .GetMachineName
	I0917 02:40:21.552792    6372 main.go:141] libmachine: (offline-docker-246000) Calling .DriverName
	I0917 02:40:21.552922    6372 start.go:159] libmachine.API.Create for "offline-docker-246000" (driver="hyperkit")
	I0917 02:40:21.552956    6372 client.go:168] LocalClient.Create starting
	I0917 02:40:21.552983    6372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem
	I0917 02:40:21.553034    6372 main.go:141] libmachine: Decoding PEM data...
	I0917 02:40:21.553045    6372 main.go:141] libmachine: Parsing certificate...
	I0917 02:40:21.553090    6372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem
	I0917 02:40:21.553126    6372 main.go:141] libmachine: Decoding PEM data...
	I0917 02:40:21.553139    6372 main.go:141] libmachine: Parsing certificate...
	I0917 02:40:21.553151    6372 main.go:141] libmachine: Running pre-create checks...
	I0917 02:40:21.553157    6372 main.go:141] libmachine: (offline-docker-246000) Calling .PreCreateCheck
	I0917 02:40:21.553228    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:21.553298    6372 main.go:141] libmachine: (offline-docker-246000) Calling .GetConfigRaw
	I0917 02:40:21.584769    6372 main.go:141] libmachine: Creating machine...
	I0917 02:40:21.584778    6372 main.go:141] libmachine: (offline-docker-246000) Calling .Create
	I0917 02:40:21.584862    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:21.584988    6372 main.go:141] libmachine: (offline-docker-246000) DBG | I0917 02:40:21.584856    6583 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:40:21.585045    6372 main.go:141] libmachine: (offline-docker-246000) Downloading /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1025/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0917 02:40:21.788186    6372 main.go:141] libmachine: (offline-docker-246000) DBG | I0917 02:40:21.788088    6583 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/id_rsa...
	I0917 02:40:21.899799    6372 main.go:141] libmachine: (offline-docker-246000) DBG | I0917 02:40:21.899732    6583 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/offline-docker-246000.rawdisk...
	I0917 02:40:21.899814    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Writing magic tar header
	I0917 02:40:21.899822    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Writing SSH key tar header
	I0917 02:40:21.900382    6372 main.go:141] libmachine: (offline-docker-246000) DBG | I0917 02:40:21.900346    6583 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000 ...
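
The second create pass rebuilds the machine directory from scratch: copy the ISO, generate an SSH key, create the raw disk, write the tar headers that let the guest pick the key up on first boot, then fix permissions. A sketch of just the raw-disk step, assuming it is a sparse file sized with Truncate (the 20000MB figure comes from the machine config above; the path here is illustrative):

// A minimal sketch of the raw-disk creation logged above: allocate a sparse
// file via Truncate rather than writing 20000MB of zeros up front.
package main

import (
	"fmt"
	"os"
)

func createRawDisk(path string, sizeMB int64) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	// Truncate grows the file without allocating blocks, so host disk space is
	// only consumed as the guest actually writes to the virtio-blk device.
	return f.Truncate(sizeMB * 1024 * 1024)
}

func main() {
	if err := createRawDisk("offline-docker-246000.rawdisk", 20000); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
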
	I0917 02:40:22.275420    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:22.275439    6372 main.go:141] libmachine: (offline-docker-246000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/hyperkit.pid
	I0917 02:40:22.275482    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Using UUID 24750b9f-f2f9-4ca2-a821-beeb81b2aa02
	I0917 02:40:22.301176    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Generated MAC ca:dd:c9:c8:18:82
	I0917 02:40:22.301202    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-246000
	I0917 02:40:22.301236    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"24750b9f-f2f9-4ca2-a821-beeb81b2aa02", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:40:22.301269    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"24750b9f-f2f9-4ca2-a821-beeb81b2aa02", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:40:22.301312    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "24750b9f-f2f9-4ca2-a821-beeb81b2aa02", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/offline-docker-246000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-246000"}
	I0917 02:40:22.301367    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 24750b9f-f2f9-4ca2-a821-beeb81b2aa02 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/offline-docker-246000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-246000"
	I0917 02:40:22.301380    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:40:22.304290    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 DEBUG: hyperkit: Pid is 6584
	I0917 02:40:22.304726    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 0
	I0917 02:40:22.304742    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:22.304808    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:40:22.305696    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:40:22.305765    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:22.305778    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:22.305804    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:22.305817    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:22.305828    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:22.305835    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:22.305842    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:22.305851    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:22.305861    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:22.305870    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:22.305888    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:22.305899    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:22.305909    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:22.305917    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:22.305929    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:22.305938    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:22.305946    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:22.305955    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:22.305968    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:22.312362    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:40:22.321505    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/offline-docker-246000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:40:22.322385    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:40:22.322409    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:40:22.322421    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:40:22.322430    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:40:22.700256    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:40:22.700272    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:40:22.815133    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:40:22.815148    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:40:22.815155    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:40:22.815166    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:40:22.815722    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:40:22.815730    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:40:24.307994    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 1
	I0917 02:40:24.308011    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:24.308066    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:40:24.308868    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:40:24.308906    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:24.308921    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:24.308928    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:24.308934    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:24.308941    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:24.308948    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:24.308955    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:24.308961    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:24.308967    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:24.308973    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:24.309007    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:24.309031    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:24.309046    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:24.309055    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:24.309062    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:24.309069    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:24.309078    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:24.309091    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:24.309100    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:26.309551    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 2
	I0917 02:40:26.309572    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:26.309652    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:40:26.310449    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:40:26.310499    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:26.310509    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:26.310535    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:26.310545    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:26.310557    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:26.310564    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:26.310570    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:26.310578    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:26.310585    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:26.310590    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:26.310596    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:26.310603    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:26.310608    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:26.310615    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:26.310621    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:26.310649    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:26.310660    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:26.310670    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:26.310687    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:28.206138    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:40:28.206283    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:40:28.206292    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:40:28.226306    6372 main.go:141] libmachine: (offline-docker-246000) DBG | 2024/09/17 02:40:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:40:28.311404    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 3
	I0917 02:40:28.311426    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:28.311569    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:40:28.312772    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:40:28.312869    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:28.312879    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:28.312897    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:28.312902    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:28.312908    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:28.312913    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:28.312919    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:28.312926    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:28.312940    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:28.312950    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:28.312956    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:28.312963    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:28.312980    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:28.312991    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:28.312999    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:28.313005    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:28.313011    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:28.313019    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:28.313027    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:30.313040    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 4
	I0917 02:40:30.313064    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:30.313160    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:40:30.313964    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:40:30.314023    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:30.314032    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:30.314041    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:30.314053    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:30.314069    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:30.314076    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:30.314082    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:30.314094    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:30.314103    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:30.314111    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:30.314118    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:30.314126    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:30.314131    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:30.314139    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:30.314149    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:30.314156    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:30.314164    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:30.314177    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:30.314189    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:32.316234    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 5
	I0917 02:40:32.316255    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:32.316330    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:40:32.317125    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:40:32.317153    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:32.317163    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:32.317170    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:32.317178    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:32.317184    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:32.317200    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:32.317220    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:32.317230    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:32.317237    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:32.317242    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:32.317255    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:32.317265    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:32.317271    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:32.317276    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:32.317290    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:32.317298    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:32.317305    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:32.317312    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:32.317329    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:34.319363    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 6
	I0917 02:40:34.319376    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:34.319434    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:40:34.320227    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:40:34.320276    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:34.320286    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:34.320305    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:34.320312    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:34.320326    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:34.320337    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:34.320358    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:34.320369    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:34.320376    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:34.320382    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:34.320402    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:34.320411    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:34.320420    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:34.320427    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:34.320434    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:34.320440    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:34.320446    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:34.320451    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:34.320457    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:36.321470    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 7
	I0917 02:40:36.321486    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:36.321580    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:40:36.322357    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:40:36.322405    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:36.322412    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:36.322423    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:36.322430    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:36.322466    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:36.322476    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:36.322497    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:36.322506    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:36.322516    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:36.322523    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:36.322544    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:36.322555    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:36.322563    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:36.322572    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:36.322579    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:36.322585    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:36.322598    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:36.322611    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:36.322621    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:38.323511    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 8
	I0917 02:40:38.323524    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:38.323592    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:40:38.324378    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:40:38.324421    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:38.324437    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:38.324459    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:38.324472    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:38.324486    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:38.324498    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:38.324511    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:38.324519    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:38.324525    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:38.324531    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:38.324546    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:38.324556    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:38.324563    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:38.324570    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:38.324578    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:38.324584    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:38.324591    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:38.324598    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:38.324614    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:40.326613    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 9
	I0917 02:40:40.326626    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:40.326666    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:40:40.327465    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:40:40.327490    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:40.327497    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:40.327524    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:40.327531    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:40.327541    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:40.327551    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:40.327558    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:40.327565    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:40.327572    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:40.327580    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:40.327588    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:40.327595    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:40.327601    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:40.327607    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:40.327620    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:40.327631    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:40.327648    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:40.327660    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:40.327669    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:42.329628    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 10
	I0917 02:40:42.329641    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:42.329681    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:40:42.330488    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:40:42.330541    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:42.330552    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:42.330560    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:42.330568    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:42.330579    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:42.330588    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:42.330596    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:42.330609    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:42.330616    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:42.330623    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:42.330630    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:42.330646    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:42.330654    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:42.330660    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:42.330676    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:42.330688    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:42.330697    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:42.330703    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:42.330723    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:44.332685    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 11
	I0917 02:40:44.332699    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:44.332754    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:40:44.333499    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:40:44.333528    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:44.333541    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:44.333549    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:44.333561    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:44.333567    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:44.333581    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:44.333587    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:44.333597    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:44.333605    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:44.333611    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:44.333617    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:44.333630    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:44.333637    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:44.333643    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:44.333651    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:44.333658    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:44.333665    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:44.333672    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:44.333679    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	[log condensed: attempts 12 through 25 (02:40:46 to 02:41:12) repeat the identical probe every ~2 seconds; each one finds hyperkit pid 6584 alive, the same 18 entries in /var/db/dhcpd_leases as listed above, and still no lease for ca:dd:c9:c8:18:82]
	I0917 02:41:14.364801    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 26
	I0917 02:41:14.364816    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:41:14.364867    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:41:14.365663    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:41:14.365715    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:41:14.365724    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:41:14.365738    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:41:14.365747    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:41:14.365754    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:41:14.365759    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:41:14.365765    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:41:14.365781    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:41:14.365792    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:41:14.365799    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:41:14.365807    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:41:14.365815    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:41:14.365823    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:41:14.365830    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:41:14.365837    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:41:14.365844    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:41:14.365851    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:41:14.365863    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:41:14.365871    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:41:16.367289    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 27
	I0917 02:41:16.367319    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:41:16.367331    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:41:16.368086    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:41:16.368130    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:41:16.368143    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:41:16.368182    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:41:16.368196    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:41:16.368206    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:41:16.368213    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:41:16.368222    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:41:16.368230    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:41:16.368238    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:41:16.368244    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:41:16.368250    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:41:16.368257    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:41:16.368264    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:41:16.368271    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:41:16.368280    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:41:16.368291    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:41:16.368298    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:41:16.368309    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:41:16.368327    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:41:18.370385    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 28
	I0917 02:41:18.370396    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:41:18.370438    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:41:18.371334    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:41:18.371391    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:41:18.371404    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:41:18.371411    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:41:18.371418    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:41:18.371433    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:41:18.371445    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:41:18.371454    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:41:18.371460    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:41:18.371467    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:41:18.371490    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:41:18.371505    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:41:18.371513    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:41:18.371524    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:41:18.371534    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:41:18.371540    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:41:18.371546    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:41:18.371556    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:41:18.371572    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:41:18.371586    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:41:20.373545    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Attempt 29
	I0917 02:41:20.373560    6372 main.go:141] libmachine: (offline-docker-246000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:41:20.373630    6372 main.go:141] libmachine: (offline-docker-246000) DBG | hyperkit pid from json: 6584
	I0917 02:41:20.374425    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Searching for ca:dd:c9:c8:18:82 in /var/db/dhcpd_leases ...
	I0917 02:41:20.374448    6372 main.go:141] libmachine: (offline-docker-246000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:41:20.374459    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:41:20.374469    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:41:20.374475    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:41:20.374483    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:41:20.374492    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:41:20.374500    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:41:20.374507    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:41:20.374514    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:41:20.374522    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:41:20.374529    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:41:20.374534    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:41:20.374570    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:41:20.374578    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:41:20.374585    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:41:20.374592    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:41:20.374606    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:41:20.374620    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:41:20.374632    6372 main.go:141] libmachine: (offline-docker-246000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:41:22.375207    6372 client.go:171] duration metric: took 1m0.821966196s to LocalClient.Create
	I0917 02:41:24.377304    6372 start.go:128] duration metric: took 1m2.855396294s to createHost
	I0917 02:41:24.377335    6372 start.go:83] releasing machines lock for "offline-docker-246000", held for 1m2.855488797s
	W0917 02:41:24.377412    6372 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-246000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:dd:c9:c8:18:82
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-246000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:dd:c9:c8:18:82
	I0917 02:41:24.440906    6372 out.go:201] 
	W0917 02:41:24.461933    6372 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:dd:c9:c8:18:82
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:dd:c9:c8:18:82
	W0917 02:41:24.461946    6372 out.go:270] * 
	* 
	W0917 02:41:24.462627    6372 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:41:24.524876    6372 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-246000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-17 02:41:24.637519 -0700 PDT m=+3830.758145322
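The stderr block above is the interesting part of this failure: the hyperkit driver generates a MAC for the VM (ca:dd:c9:c8:18:82), then polls /var/db/dhcpd_leases roughly every two seconds (attempts 0 through 29) looking for a lease with that hardware address. All 18 leases it finds belong to earlier minikube VMs, so LocalClient.Create gives up after about a minute. A minimal sketch of that kind of lease scan, assuming the macOS lease format implied by the dhcp entry lines (name=/ip_address=/hw_address= fields per entry; leaseIPForMAC is a hypothetical helper, not the driver's actual code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // leaseIPForMAC scans a macOS dhcpd_leases file for an entry whose
    // hw_address matches mac and returns its ip_address. Entries look like:
    // { name=minikube ip_address=... hw_address=1,... lease=0x... } on separate lines.
    func leaseIPForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        ip := ""
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{": // a new lease entry starts
                ip = ""
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                hw := strings.TrimPrefix(line, "hw_address=") // e.g. "1,a:b6:8:34:25:a6"
                if i := strings.IndexByte(hw, ','); i >= 0 {
                    hw = hw[i+1:] // drop the "1," hardware-type prefix
                }
                // The file stores octets without leading zeros, so a real
                // implementation normalizes both sides before comparing.
                if strings.EqualFold(hw, mac) && ip != "" {
                    return ip, nil
                }
            }
        }
        if err := sc.Err(); err != nil {
            return "", err
        }
        return "", fmt.Errorf("no lease for %s", mac)
    }

    func main() {
        ip, err := leaseIPForMAC("/var/db/dhcpd_leases", "ca:dd:c9:c8:18:82")
        if err != nil {
            fmt.Fprintln(os.Stderr, err) // on this run: no lease for ca:dd:c9:c8:18:82
            os.Exit(1)
        }
        fmt.Println(ip)
    }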
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-246000 -n offline-docker-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-246000 -n offline-docker-246000: exit status 7 (84.102105ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0917 02:41:24.719654    6604 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 02:41:24.719674    6604 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-246000" host is not running, skipping log retrieval (state="Error")
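The post-mortem status check shows why exit status 7 is tolerated: minikube status reports machine state both in its formatted stdout ("Error" here) and in its exit code, so the harness keys off the printed state rather than treating any non-zero exit as fatal. A hedged sketch of reading both from a Go caller (binary path and profile name taken from this run; the exit-code meaning assumed here is only what this log demonstrates):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-amd64", "status",
            "--format={{.Host}}", "-p", "offline-docker-246000")
        out, err := cmd.Output() // stdout carries the host state, e.g. "Error"

        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // A non-zero exit encodes the same state (7 in this run); a dead
            // host is expected after a failed create, so don't treat it as fatal.
            fmt.Printf("host=%q exit=%d\n", strings.TrimSpace(string(out)), ee.ExitCode())
            return
        }
        if err != nil {
            panic(err) // binary not found, etc.
        }
        fmt.Printf("host=%q exit=0\n", strings.TrimSpace(string(out)))
    }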
helpers_test.go:175: Cleaning up "offline-docker-246000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-246000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-246000: (5.265993368s)
--- FAIL: TestOffline (195.38s)

TestAddons/parallel/Registry (74.05s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.621157ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-jc9c9" [5ebe9b61-99d8-42d6-9925-57fe4224f525] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004541004s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wx6wt" [2d8d4a63-2d55-49da-9763-4fb31b7dc6c9] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004207333s
addons_test.go:342: (dbg) Run:  kubectl --context addons-190000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-190000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-190000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.062931558s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-190000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
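The failed step is a one-shot connectivity probe: kubectl run --rm --restart=Never starts a disposable busybox pod, and wget --spider -S issues a HEAD-style request against the registry Service's in-cluster DNS name, printing only the response headers; the expected first header line is "HTTP/1.1 200". Timing out at 1m0s means the pod never got a response from the service. A minimal sketch of driving the same probe from Go with an explicit deadline (command adapted from the log; -t is dropped since there is no TTY here, and the 90s budget is an assumption):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Overall deadline covering pod scheduling, image pull, and the request.
        ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
        defer cancel()

        // --rm deletes the pod afterwards; --restart=Never keeps it a bare pod.
        cmd := exec.CommandContext(ctx, "kubectl", "--context", "addons-190000",
            "run", "--rm", "registry-test", "--restart=Never",
            "--image=gcr.io/k8s-minikube/busybox", "-i", "--",
            "sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s\nerr=%v\n", out, err) // a healthy registry prints HTTP/1.1 200 OK
    }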
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 ip
2024/09/17 01:51:30 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p addons-190000 -n addons-190000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p addons-190000 logs -n 25: (2.678683391s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-222000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |                     |
	|         | -p download-only-222000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| delete  | -p download-only-222000              | download-only-222000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| start   | -o=json --download-only              | download-only-405000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |                     |
	|         | -p download-only-405000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 17 Sep 24 01:38 PDT | 17 Sep 24 01:38 PDT |
	| delete  | -p download-only-405000              | download-only-405000 | jenkins | v1.34.0 | 17 Sep 24 01:38 PDT | 17 Sep 24 01:38 PDT |
	| delete  | -p download-only-222000              | download-only-222000 | jenkins | v1.34.0 | 17 Sep 24 01:38 PDT | 17 Sep 24 01:38 PDT |
	| delete  | -p download-only-405000              | download-only-405000 | jenkins | v1.34.0 | 17 Sep 24 01:38 PDT | 17 Sep 24 01:38 PDT |
	| start   | --download-only -p                   | binary-mirror-670000 | jenkins | v1.34.0 | 17 Sep 24 01:38 PDT |                     |
	|         | binary-mirror-670000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49640               |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-670000              | binary-mirror-670000 | jenkins | v1.34.0 | 17 Sep 24 01:38 PDT | 17 Sep 24 01:38 PDT |
	| addons  | enable dashboard -p                  | addons-190000        | jenkins | v1.34.0 | 17 Sep 24 01:38 PDT |                     |
	|         | addons-190000                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-190000        | jenkins | v1.34.0 | 17 Sep 24 01:38 PDT |                     |
	|         | addons-190000                        |                      |         |         |                     |                     |
	| start   | -p addons-190000 --wait=true         | addons-190000        | jenkins | v1.34.0 | 17 Sep 24 01:38 PDT | 17 Sep 24 01:41 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=hyperkit  --addons=ingress  |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | addons-190000 addons disable         | addons-190000        | jenkins | v1.34.0 | 17 Sep 24 01:42 PDT | 17 Sep 24 01:42 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-190000 addons                 | addons-190000        | jenkins | v1.34.0 | 17 Sep 24 01:51 PDT | 17 Sep 24 01:51 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-190000 addons                 | addons-190000        | jenkins | v1.34.0 | 17 Sep 24 01:51 PDT | 17 Sep 24 01:51 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-190000 addons disable         | addons-190000        | jenkins | v1.34.0 | 17 Sep 24 01:51 PDT | 17 Sep 24 01:51 PDT |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-190000        | jenkins | v1.34.0 | 17 Sep 24 01:51 PDT | 17 Sep 24 01:51 PDT |
	|         | -p addons-190000                     |                      |         |         |                     |                     |
	| ip      | addons-190000 ip                     | addons-190000        | jenkins | v1.34.0 | 17 Sep 24 01:51 PDT | 17 Sep 24 01:51 PDT |
	| addons  | addons-190000 addons disable         | addons-190000        | jenkins | v1.34.0 | 17 Sep 24 01:51 PDT | 17 Sep 24 01:51 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 01:38:05
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:38:05.252332    1643 out.go:345] Setting OutFile to fd 1 ...
	I0917 01:38:05.252503    1643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:38:05.252508    1643 out.go:358] Setting ErrFile to fd 2...
	I0917 01:38:05.252512    1643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:38:05.252669    1643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 01:38:05.254936    1643 out.go:352] Setting JSON to false
	I0917 01:38:05.278897    1643 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":455,"bootTime":1726561830,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 01:38:05.279046    1643 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 01:38:05.300360    1643 out.go:177] * [addons-190000] minikube v1.34.0 on Darwin 14.6.1
	I0917 01:38:05.342207    1643 notify.go:220] Checking for updates...
	I0917 01:38:05.364277    1643 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 01:38:05.408068    1643 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 01:38:05.452117    1643 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 01:38:05.495178    1643 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:38:05.540945    1643 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 01:38:05.583154    1643 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:38:05.604416    1643 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 01:38:05.634217    1643 out.go:177] * Using the hyperkit driver based on user configuration
	I0917 01:38:05.676111    1643 start.go:297] selected driver: hyperkit
	I0917 01:38:05.676138    1643 start.go:901] validating driver "hyperkit" against <nil>
	I0917 01:38:05.676155    1643 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:38:05.680222    1643 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:38:05.680343    1643 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 01:38:05.688822    1643 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 01:38:05.692907    1643 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:05.692933    1643 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 01:38:05.692964    1643 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 01:38:05.693204    1643 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:38:05.693238    1643 cni.go:84] Creating CNI manager for ""
	I0917 01:38:05.693279    1643 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 01:38:05.693284    1643 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 01:38:05.693350    1643 start.go:340] cluster config:
	{Name:addons-190000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:38:05.693433    1643 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:38:05.735178    1643 out.go:177] * Starting "addons-190000" primary control-plane node in "addons-190000" cluster
	I0917 01:38:05.758006    1643 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 01:38:05.758073    1643 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 01:38:05.758096    1643 cache.go:56] Caching tarball of preloaded images
	I0917 01:38:05.758282    1643 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 01:38:05.758299    1643 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 01:38:05.758855    1643 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/config.json ...
	I0917 01:38:05.758893    1643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/config.json: {Name:mk1d5b9f506da5cc8847850731972e7c1491e7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
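The profile's cluster config (the struct dumped after "cluster config:" above) is persisted as JSON at .minikube/profiles/<name>/config.json, guarded by a write lock. A minimal sketch of reading a few of those fields back, assuming only field names visible in that dump (the default $HOME/.minikube location stands in for this job's MINIKUBE_HOME):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Only the fields we care about; json.Unmarshal ignores the rest of the file.
    type clusterConfig struct {
        Name     string
        Driver   string
        Memory   int
        CPUs     int
        DiskSize int
    }

    func main() {
        b, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/addons-190000/config.json"))
        if err != nil {
            panic(err)
        }
        var cc clusterConfig
        if err := json.Unmarshal(b, &cc); err != nil {
            panic(err)
        }
        fmt.Printf("%s: driver=%s mem=%dMB cpus=%d disk=%dMB\n",
            cc.Name, cc.Driver, cc.Memory, cc.CPUs, cc.DiskSize)
    }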
	I0917 01:38:05.759635    1643 start.go:360] acquireMachinesLock for addons-190000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 01:38:05.759833    1643 start.go:364] duration metric: took 175.854µs to acquireMachinesLock for "addons-190000"
	I0917 01:38:05.759877    1643 start.go:93] Provisioning new machine with config: &{Name:addons-190000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 01:38:05.759961    1643 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 01:38:05.780356    1643 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0917 01:38:05.780686    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:05.780757    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:05.790799    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49647
	I0917 01:38:05.791144    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:05.791573    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:05.791588    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:05.791801    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:05.791908    1643 main.go:141] libmachine: (addons-190000) Calling .GetMachineName
	I0917 01:38:05.792016    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:05.792110    1643 start.go:159] libmachine.API.Create for "addons-190000" (driver="hyperkit")
	I0917 01:38:05.792140    1643 client.go:168] LocalClient.Create starting
	I0917 01:38:05.792176    1643 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem
	I0917 01:38:05.921916    1643 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem
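Before any VM exists, libmachine generates a certificate authority (certs/ca.pem) and a client certificate (certs/cert.pem) that later secure the Docker API connection to the guest. A self-signed CA of that general shape can be produced with Go's crypto/x509; this is an illustrative sketch, not libmachine's code (key size, lifetime, and CommonName are assumptions):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "ca"},
            NotBefore:             time.Now().Add(-time.Hour), // tolerate clock skew
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        // Self-signed: the template acts as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        f, err := os.Create("ca.pem")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }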
	I0917 01:38:06.120447    1643 main.go:141] libmachine: Running pre-create checks...
	I0917 01:38:06.120460    1643 main.go:141] libmachine: (addons-190000) Calling .PreCreateCheck
	I0917 01:38:06.120707    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:06.120881    1643 main.go:141] libmachine: (addons-190000) Calling .GetConfigRaw
	I0917 01:38:06.121391    1643 main.go:141] libmachine: Creating machine...
	I0917 01:38:06.121405    1643 main.go:141] libmachine: (addons-190000) Calling .Create
	I0917 01:38:06.121505    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:06.121660    1643 main.go:141] libmachine: (addons-190000) DBG | I0917 01:38:06.121507    1652 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 01:38:06.121766    1643 main.go:141] libmachine: (addons-190000) Downloading /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1025/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0917 01:38:06.392856    1643 main.go:141] libmachine: (addons-190000) DBG | I0917 01:38:06.392723    1652 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa...
	I0917 01:38:06.430393    1643 main.go:141] libmachine: (addons-190000) DBG | I0917 01:38:06.430318    1652 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/addons-190000.rawdisk...
	I0917 01:38:06.430408    1643 main.go:141] libmachine: (addons-190000) DBG | Writing magic tar header
	I0917 01:38:06.430416    1643 main.go:141] libmachine: (addons-190000) DBG | Writing SSH key tar header
	I0917 01:38:06.430875    1643 main.go:141] libmachine: (addons-190000) DBG | I0917 01:38:06.430836    1652 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000 ...
	I0917 01:38:06.960207    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:06.960234    1643 main.go:141] libmachine: (addons-190000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/hyperkit.pid
	I0917 01:38:06.960316    1643 main.go:141] libmachine: (addons-190000) DBG | Using UUID d98bee8a-31fb-4314-adb0-d8da382f990b
	I0917 01:38:07.247576    1643 main.go:141] libmachine: (addons-190000) DBG | Generated MAC 32:ad:62:91:12:32
	I0917 01:38:07.247601    1643 main.go:141] libmachine: (addons-190000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-190000
	I0917 01:38:07.247641    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d98bee8a-31fb-4314-adb0-d8da382f990b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00019a630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 01:38:07.247678    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d98bee8a-31fb-4314-adb0-d8da382f990b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00019a630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 01:38:07.247741    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/hyperkit.pid", "-c", "2", "-m", "4000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "d98bee8a-31fb-4314-adb0-d8da382f990b", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/addons-190000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-190000"}
	I0917 01:38:07.247780    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/hyperkit.pid -c 2 -m 4000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U d98bee8a-31fb-4314-adb0-d8da382f990b -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/addons-190000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-190000"
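The Arguments slice logged above is exactly what the driver execs. As a minimal, illustrative Go sketch of that launch step (pid-file path shortened here; the real invocation uses the per-machine StateDir shown in the log):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Illustrative subset of the flags logged above: pid file, 2 vCPUs,
        // 4000M of memory. The full argv also wires up disks, networking,
        // the serial console and the kexec boot of bzimage/initrd.
        cmd := exec.Command("/usr/local/bin/hyperkit",
            "-A", "-u",
            "-F", "/tmp/hyperkit.pid",
            "-c", "2",
            "-m", "4000M",
        )
        if err := cmd.Start(); err != nil { // Start returns once the child is spawned
            log.Fatal(err)
        }
        log.Printf("hyperkit pid: %d", cmd.Process.Pid) // cf. "Pid is 1658" above
    }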
	I0917 01:38:07.247792    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 01:38:07.250817    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 DEBUG: hyperkit: Pid is 1658
	I0917 01:38:07.251236    1643 main.go:141] libmachine: (addons-190000) DBG | Attempt 0
	I0917 01:38:07.251252    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:07.251304    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:07.252206    1643 main.go:141] libmachine: (addons-190000) DBG | Searching for 32:ad:62:91:12:32 in /var/db/dhcpd_leases ...
	I0917 01:38:07.269016    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I0917 01:38:07.337144    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 01:38:07.337819    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 01:38:07.337839    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 01:38:07.337867    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 01:38:07.337883    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 01:38:07.887353    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 01:38:07.887366    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 01:38:08.004033    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 01:38:08.004056    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 01:38:08.004077    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 01:38:08.004100    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 01:38:08.004932    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:08 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 01:38:08.004941    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:08 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 01:38:09.252488    1643 main.go:141] libmachine: (addons-190000) DBG | Attempt 1
	I0917 01:38:09.252509    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:09.252612    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:09.253405    1643 main.go:141] libmachine: (addons-190000) DBG | Searching for 32:ad:62:91:12:32 in /var/db/dhcpd_leases ...
	I0917 01:38:11.253557    1643 main.go:141] libmachine: (addons-190000) DBG | Attempt 2
	I0917 01:38:11.253573    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:11.253659    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:11.254511    1643 main.go:141] libmachine: (addons-190000) DBG | Searching for 32:ad:62:91:12:32 in /var/db/dhcpd_leases ...
	I0917 01:38:13.254651    1643 main.go:141] libmachine: (addons-190000) DBG | Attempt 3
	I0917 01:38:13.254664    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:13.254760    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:13.255600    1643 main.go:141] libmachine: (addons-190000) DBG | Searching for 32:ad:62:91:12:32 in /var/db/dhcpd_leases ...
	I0917 01:38:13.792230    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:13 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 01:38:13.792296    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:13 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 01:38:13.792303    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:13 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 01:38:13.811796    1643 main.go:141] libmachine: (addons-190000) DBG | 2024/09/17 01:38:13 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 01:38:15.255729    1643 main.go:141] libmachine: (addons-190000) DBG | Attempt 4
	I0917 01:38:15.255742    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:15.255796    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:15.256585    1643 main.go:141] libmachine: (addons-190000) DBG | Searching for 32:ad:62:91:12:32 in /var/db/dhcpd_leases ...
	I0917 01:38:17.256744    1643 main.go:141] libmachine: (addons-190000) DBG | Attempt 5
	I0917 01:38:17.256761    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:17.256870    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:17.257911    1643 main.go:141] libmachine: (addons-190000) DBG | Searching for 32:ad:62:91:12:32 in /var/db/dhcpd_leases ...
	I0917 01:38:17.257969    1643 main.go:141] libmachine: (addons-190000) DBG | Found 1 entries in /var/db/dhcpd_leases!
	I0917 01:38:17.257982    1643 main.go:141] libmachine: (addons-190000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 01:38:17.257996    1643 main.go:141] libmachine: (addons-190000) DBG | Found match: 32:ad:62:91:12:32
	I0917 01:38:17.258006    1643 main.go:141] libmachine: (addons-190000) DBG | IP: 192.169.0.2
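The attempt loop above polls macOS's DHCP lease database until the VM's MAC address shows up, then reads the matching IP. A rough stand-alone sketch of that scan (file path and MAC taken from the log; real entries look like "hw_address=1,32:ad:62:91:12:32", so a substring match suffices for illustration):

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        const mac = "32:ad:62:91:12:32" // HWAddress the driver is waiting for
        f, err := os.Open("/var/db/dhcpd_leases")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if line := sc.Text(); strings.Contains(line, mac) {
                fmt.Println("found lease line:", line) // driver then extracts IPAddress
            }
        }
    }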
	I0917 01:38:17.258073    1643 main.go:141] libmachine: (addons-190000) Calling .GetConfigRaw
	I0917 01:38:17.258854    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:17.258988    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:17.259116    1643 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 01:38:17.259127    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:17.259233    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:17.259297    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:17.260123    1643 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 01:38:17.260133    1643 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 01:38:17.260137    1643 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 01:38:17.260141    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:17.260224    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:17.260313    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:17.260407    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:17.260494    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:17.261192    1643 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:17.261352    1643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e1a820] 0x7e1d500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 01:38:17.261360    1643 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 01:38:18.263190    1643 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.2:22: connect: connection refused
	I0917 01:38:21.316784    1643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
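The "connection refused" at 01:38:18 followed by the clean "SSH cmd err, output: <nil>" above is the standard wait-for-SSH pattern: keep probing the guest's port 22 until sshd answers, then run a trivial `exit 0` over the session. A minimal probe loop (address from the log; retry count and delay invented for illustration):

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        addr := "192.169.0.2:22" // guest IP and SSH port from the log
        for attempt := 0; attempt < 30; attempt++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                log.Println("SSH port open") // the driver then runs `exit 0` over SSH
                return
            }
            log.Printf("attempt %d: %v", attempt, err) // e.g. "connect: connection refused"
            time.Sleep(2 * time.Second)
        }
        log.Fatal("gave up waiting for SSH")
    }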
	I0917 01:38:21.316799    1643 main.go:141] libmachine: Detecting the provisioner...
	I0917 01:38:21.316808    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:21.316957    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:21.317058    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.317143    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.317241    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:21.317364    1643 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:21.317521    1643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e1a820] 0x7e1d500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 01:38:21.317530    1643 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 01:38:21.372558    1643 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 01:38:21.372630    1643 main.go:141] libmachine: found compatible host: buildroot
	I0917 01:38:21.372637    1643 main.go:141] libmachine: Provisioning with buildroot...
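Provisioner detection keys off the ID field of the /etc/os-release contents fetched above. A toy parser, hard-coding the sample output from this log:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Output of `cat /etc/os-release` copied from the log above.
        osRelease := `NAME=Buildroot
    VERSION=2023.02.9-dirty
    ID=buildroot
    VERSION_ID=2023.02.9
    PRETTY_NAME="Buildroot 2023.02.9"`

        for _, line := range strings.Split(osRelease, "\n") {
            // TrimSpace tolerates indentation; note "VERSION_ID=" does not
            // match the "ID=" prefix, so only the ID line is picked up.
            if id, ok := strings.CutPrefix(strings.TrimSpace(line), "ID="); ok {
                fmt.Println("detected provisioner:", id) // prints "buildroot"
            }
        }
    }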
	I0917 01:38:21.372643    1643 main.go:141] libmachine: (addons-190000) Calling .GetMachineName
	I0917 01:38:21.372786    1643 buildroot.go:166] provisioning hostname "addons-190000"
	I0917 01:38:21.372797    1643 main.go:141] libmachine: (addons-190000) Calling .GetMachineName
	I0917 01:38:21.372893    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:21.372993    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:21.373082    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.373169    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.373263    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:21.373386    1643 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:21.373517    1643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e1a820] 0x7e1d500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 01:38:21.373526    1643 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-190000 && echo "addons-190000" | sudo tee /etc/hostname
	I0917 01:38:21.436673    1643 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-190000
	
	I0917 01:38:21.436694    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:21.436823    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:21.436924    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.437026    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.437114    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:21.437234    1643 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:21.437375    1643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e1a820] 0x7e1d500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 01:38:21.437389    1643 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-190000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-190000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-190000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 01:38:21.495880    1643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 01:38:21.495898    1643 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 01:38:21.495915    1643 buildroot.go:174] setting up certificates
	I0917 01:38:21.495925    1643 provision.go:84] configureAuth start
	I0917 01:38:21.495932    1643 main.go:141] libmachine: (addons-190000) Calling .GetMachineName
	I0917 01:38:21.496071    1643 main.go:141] libmachine: (addons-190000) Calling .GetIP
	I0917 01:38:21.496162    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:21.496247    1643 provision.go:143] copyHostCerts
	I0917 01:38:21.496349    1643 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 01:38:21.496898    1643 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 01:38:21.497092    1643 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 01:38:21.497237    1643 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.addons-190000 san=[127.0.0.1 192.169.0.2 addons-190000 localhost minikube]
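The server certificate generated above carries SANs for loopback, the VM IP and the machine's hostnames, so the Docker endpoint verifies under any of those names. A self-signed illustration with the same SANs (the real flow signs with the minikube CA rather than self-signing):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "log"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-190000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            // SANs copied from the san=[...] list in the log above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.2")},
            DNSNames:    []string{"addons-190000", "localhost", "minikube"},
        }
        // Template doubles as parent, i.e. self-signed for this sketch.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("generated %d-byte DER certificate", len(der))
    }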
	I0917 01:38:21.560031    1643 provision.go:177] copyRemoteCerts
	I0917 01:38:21.560461    1643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:38:21.560488    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:21.560638    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:21.560733    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.560828    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:21.560952    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:21.595834    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 01:38:21.616883    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 01:38:21.637415    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 01:38:21.658314    1643 provision.go:87] duration metric: took 162.375343ms to configureAuth
	I0917 01:38:21.658336    1643 buildroot.go:189] setting minikube options for container-runtime
	I0917 01:38:21.658481    1643 config.go:182] Loaded profile config "addons-190000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 01:38:21.658498    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:21.658648    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:21.658754    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:21.658873    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.658972    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.659059    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:21.659184    1643 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:21.659315    1643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e1a820] 0x7e1d500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 01:38:21.659323    1643 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 01:38:21.714301    1643 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 01:38:21.714314    1643 buildroot.go:70] root file system type: tmpfs
	I0917 01:38:21.714390    1643 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 01:38:21.714406    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:21.714538    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:21.714615    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.714702    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.714793    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:21.714926    1643 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:21.715058    1643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e1a820] 0x7e1d500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 01:38:21.715105    1643 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 01:38:21.780934    1643 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 01:38:21.780957    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:21.781092    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:21.781179    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.781271    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:21.781352    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:21.781485    1643 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:21.781630    1643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e1a820] 0x7e1d500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 01:38:21.781642    1643 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 01:38:23.361586    1643 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
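The `diff -u old new || { mv ...; daemon-reload; restart; }` one-liner above makes the unit update idempotent: docker is only rewritten and restarted when the freshly rendered unit differs from what is on disk (here diff fails because no unit existed yet, so the new file is installed and the service enabled). The same write-if-changed idea, as an illustrative Go helper:

    package main

    import (
        "bytes"
        "log"
        "os"
    )

    // writeIfChanged mirrors the diff-then-mv idiom from the log: replace the
    // target (and report that a restart is needed) only when contents differ.
    func writeIfChanged(path string, data []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, data) {
            return false, nil // identical; skip daemon-reload/restart
        }
        if err := os.WriteFile(path, data, 0o644); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        if err != nil {
            log.Fatal(err)
        }
        log.Println("restart needed:", changed)
    }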
	
	I0917 01:38:23.361601    1643 main.go:141] libmachine: Checking connection to Docker...
	I0917 01:38:23.361613    1643 main.go:141] libmachine: (addons-190000) Calling .GetURL
	I0917 01:38:23.361775    1643 main.go:141] libmachine: Docker is up and running!
	I0917 01:38:23.361783    1643 main.go:141] libmachine: Reticulating splines...
	I0917 01:38:23.361788    1643 client.go:171] duration metric: took 17.569438716s to LocalClient.Create
	I0917 01:38:23.361801    1643 start.go:167] duration metric: took 17.569488121s to libmachine.API.Create "addons-190000"
	I0917 01:38:23.361814    1643 start.go:293] postStartSetup for "addons-190000" (driver="hyperkit")
	I0917 01:38:23.361823    1643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:38:23.361833    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:23.362013    1643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:38:23.362032    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:23.362165    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:23.362275    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:23.362378    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:23.362468    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:23.403108    1643 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:38:23.406622    1643 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 01:38:23.406640    1643 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 01:38:23.406734    1643 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 01:38:23.406786    1643 start.go:296] duration metric: took 44.965484ms for postStartSetup
	I0917 01:38:23.406807    1643 main.go:141] libmachine: (addons-190000) Calling .GetConfigRaw
	I0917 01:38:23.407384    1643 main.go:141] libmachine: (addons-190000) Calling .GetIP
	I0917 01:38:23.407519    1643 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/config.json ...
	I0917 01:38:23.407870    1643 start.go:128] duration metric: took 17.647692006s to createHost
	I0917 01:38:23.407884    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:23.407994    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:23.408075    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:23.408160    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:23.408257    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:23.408374    1643 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:23.408499    1643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e1a820] 0x7e1d500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0917 01:38:23.408507    1643 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 01:38:23.467230    1643 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726562302.554942578
	
	I0917 01:38:23.467243    1643 fix.go:216] guest clock: 1726562302.554942578
	I0917 01:38:23.467248    1643 fix.go:229] Guest: 2024-09-17 01:38:22.554942578 -0700 PDT Remote: 2024-09-17 01:38:23.407878 -0700 PDT m=+18.190949984 (delta=-852.935422ms)
	I0917 01:38:23.467265    1643 fix.go:200] guest clock delta is within tolerance: -852.935422ms
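The fix step compares the guest's `date +%s.%N` output against the host clock and accepts the skew if it is small. Reproducing the arithmetic with the two timestamps from this log (the tolerance constant below is hypothetical; the threshold minikube actually applies is not shown here):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Guest clock via `date +%s.%N` and the host timestamp, both from the log.
        guestSec, _ := strconv.ParseFloat("1726562302.554942578", 64)
        guest := time.Unix(0, int64(guestSec*1e9))
        host := time.Date(2024, 9, 17, 1, 38, 23, 407878000, time.FixedZone("PDT", -7*3600))

        delta := guest.Sub(host)
        fmt.Println("guest clock delta:", delta) // ≈ -852.935ms, matching the log
        ok := delta > -2*time.Second && delta < 2*time.Second // hypothetical tolerance
        fmt.Println("within tolerance:", ok)
    }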
	I0917 01:38:23.467269    1643 start.go:83] releasing machines lock for "addons-190000", held for 17.707219688s
	I0917 01:38:23.467284    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:23.467416    1643 main.go:141] libmachine: (addons-190000) Calling .GetIP
	I0917 01:38:23.467526    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:23.467830    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:23.467930    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:23.468076    1643 ssh_runner.go:195] Run: cat /version.json
	I0917 01:38:23.468090    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:23.468174    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:23.468190    1643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:38:23.468215    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:23.468282    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:23.468296    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:23.468381    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:23.468399    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:23.468474    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:23.468493    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:23.468559    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:23.499934    1643 ssh_runner.go:195] Run: systemctl --version
	I0917 01:38:23.568522    1643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 01:38:23.573636    1643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 01:38:23.573684    1643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:38:23.586799    1643 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 01:38:23.586812    1643 start.go:495] detecting cgroup driver to use...
	I0917 01:38:23.586936    1643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:38:23.601692    1643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 01:38:23.611319    1643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 01:38:23.620258    1643 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 01:38:23.620313    1643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 01:38:23.628886    1643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 01:38:23.637225    1643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 01:38:23.645362    1643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 01:38:23.654442    1643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:38:23.664055    1643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 01:38:23.673263    1643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 01:38:23.681460    1643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 01:38:23.689693    1643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:38:23.697076    1643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 01:38:23.705262    1643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:23.801836    1643 ssh_runner.go:195] Run: sudo systemctl restart containerd
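The run of sed edits above rewrites /etc/containerd/config.toml in place; the key one forces SystemdCgroup = false so containerd uses the cgroupfs driver, matching the kubelet cgroupDriver setting later in this log. The same substitution expressed directly in Go (config fragment invented for illustration):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Fragment of a containerd config.toml; the real file is larger.
        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`

        // Equivalent of the logged sed: s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }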
	I0917 01:38:23.821921    1643 start.go:495] detecting cgroup driver to use...
	I0917 01:38:23.821992    1643 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 01:38:23.836253    1643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:38:23.850812    1643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:38:23.864790    1643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:38:23.877210    1643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 01:38:23.888269    1643 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 01:38:23.913283    1643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 01:38:23.925125    1643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:38:23.941110    1643 ssh_runner.go:195] Run: which cri-dockerd
	I0917 01:38:23.943950    1643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 01:38:23.952091    1643 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 01:38:23.966782    1643 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 01:38:24.067382    1643 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 01:38:24.172406    1643 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 01:38:24.172478    1643 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 01:38:24.187125    1643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:24.306997    1643 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 01:38:26.668026    1643 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.360982544s)
	I0917 01:38:26.668081    1643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 01:38:26.680090    1643 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 01:38:26.694262    1643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 01:38:26.706427    1643 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 01:38:26.815519    1643 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 01:38:26.916122    1643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:27.025466    1643 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 01:38:27.038738    1643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 01:38:27.050340    1643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:27.177902    1643 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 01:38:27.240252    1643 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 01:38:27.240814    1643 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 01:38:27.245354    1643 start.go:563] Will wait 60s for crictl version
	I0917 01:38:27.245405    1643 ssh_runner.go:195] Run: which crictl
	I0917 01:38:27.250894    1643 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:38:27.277684    1643 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 01:38:27.277772    1643 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 01:38:27.294686    1643 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 01:38:27.346328    1643 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 01:38:27.346388    1643 main.go:141] libmachine: (addons-190000) Calling .GetIP
	I0917 01:38:27.347421    1643 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 01:38:27.351935    1643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
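The bash one-liner above is an upsert on /etc/hosts: strip any stale line for the name, then append the fresh "IP<tab>name" mapping, so repeated starts stay idempotent. The same transformation as a pure function (illustrative only; the driver does this through the shell pipeline shown, not Go):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost drops any existing line ending in "\t<name>" and appends the
    // fresh mapping, mirroring the grep -v / echo pipeline in the log.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
        fmt.Print(upsertHost("127.0.0.1\tlocalhost", "192.169.0.1", "host.minikube.internal"))
    }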
	I0917 01:38:27.362642    1643 kubeadm.go:883] updating cluster {Name:addons-190000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 01:38:27.362719    1643 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 01:38:27.362793    1643 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 01:38:27.376616    1643 docker.go:685] Got preloaded images: 
	I0917 01:38:27.376628    1643 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0917 01:38:27.376683    1643 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 01:38:27.384499    1643 ssh_runner.go:195] Run: which lz4
	I0917 01:38:27.387475    1643 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 01:38:27.390559    1643 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 01:38:27.390575    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0917 01:38:28.347004    1643 docker.go:649] duration metric: took 959.574641ms to copy over tarball
	I0917 01:38:28.347084    1643 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 01:38:31.471552    1643 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.124413498s)
	I0917 01:38:31.471566    1643 ssh_runner.go:146] rm: /preloaded.tar.lz4
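The preload path above is: stat the tarball on the guest (missing on first boot), scp the ~342 MB image cache over, unpack it with lz4 into /var, then delete the tarball and restart docker. The check-then-extract portion, sketched with the exact tar flags from the log:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4" // guest-side path from the log

        // Existence check, as in the logged `stat -c "%s %y"`; on first start
        // this fails and the tarball is copied over before extraction.
        if _, err := os.Stat(tarball); err != nil {
            log.Printf("preload missing (%v); copy it over first", err)
            return
        }

        // Extraction with the flags from the log: decompress via lz4, unpack
        // into /var, preserving security xattrs on the image layers.
        cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }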
	I0917 01:38:31.497504    1643 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 01:38:31.506142    1643 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0917 01:38:31.520900    1643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:31.626149    1643 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 01:38:34.060526    1643 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.434329497s)
	I0917 01:38:34.060617    1643 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 01:38:34.075162    1643 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 01:38:34.075184    1643 cache_images.go:84] Images are preloaded, skipping loading
	I0917 01:38:34.075190    1643 kubeadm.go:934] updating node { 192.169.0.2 8443 v1.31.1 docker true true} ...
	I0917 01:38:34.075277    1643 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-190000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 01:38:34.075373    1643 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 01:38:34.111022    1643 cni.go:84] Creating CNI manager for ""
	I0917 01:38:34.111037    1643 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 01:38:34.111049    1643 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 01:38:34.111065    1643 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-190000 NodeName:addons-190000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 01:38:34.111159    1643 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-190000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 01:38:34.111221    1643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 01:38:34.120267    1643 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 01:38:34.120317    1643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 01:38:34.128959    1643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 01:38:34.142534    1643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 01:38:34.156738    1643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0917 01:38:34.172126    1643 ssh_runner.go:195] Run: grep 192.169.0.2	control-plane.minikube.internal$ /etc/hosts
	I0917 01:38:34.175322    1643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:38:34.184956    1643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:34.291287    1643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:38:34.307308    1643 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000 for IP: 192.169.0.2
	I0917 01:38:34.307323    1643 certs.go:194] generating shared ca certs ...
	I0917 01:38:34.307342    1643 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.307522    1643 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 01:38:34.394952    1643 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt ...
	I0917 01:38:34.394969    1643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt: {Name:mk165d4dbf3121b1062354535b8c7b0f4bcd1362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.395250    1643 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key ...
	I0917 01:38:34.395258    1643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key: {Name:mkf4cdbdb44b84ad9ca5568ef6aef89a9790f799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.395459    1643 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 01:38:34.453796    1643 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt ...
	I0917 01:38:34.453809    1643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt: {Name:mk2be75dc49d75c7c7d58ccd9dd47c707cef7be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.454097    1643 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key ...
	I0917 01:38:34.454105    1643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key: {Name:mk755b7634b041c09cd41f84bff62d1aca27c7ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.454344    1643 certs.go:256] generating profile certs ...
	I0917 01:38:34.454395    1643 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.key
	I0917 01:38:34.454410    1643 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt with IP's: []
	I0917 01:38:34.564528    1643 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt ...
	I0917 01:38:34.564543    1643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: {Name:mk04f653cdb38a0b5457479cbe0e8b5988b59e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.564987    1643 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.key ...
	I0917 01:38:34.565000    1643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.key: {Name:mk0f20e1d02e7eeede35c2301855d45ef64c3cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.565214    1643 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/apiserver.key.56746c3c
	I0917 01:38:34.565234    1643 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/apiserver.crt.56746c3c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.2]
	I0917 01:38:34.710251    1643 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/apiserver.crt.56746c3c ...
	I0917 01:38:34.710267    1643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/apiserver.crt.56746c3c: {Name:mkda86f131d5ddec5e34d1b42841e0ca1fb860ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.710560    1643 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/apiserver.key.56746c3c ...
	I0917 01:38:34.710572    1643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/apiserver.key.56746c3c: {Name:mk53472cd52e106bcec7caf1205c00d3677f43b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.710807    1643 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/apiserver.crt.56746c3c -> /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/apiserver.crt
	I0917 01:38:34.710998    1643 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/apiserver.key.56746c3c -> /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/apiserver.key
	I0917 01:38:34.711168    1643 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/proxy-client.key
	I0917 01:38:34.711190    1643 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/proxy-client.crt with IP's: []
	I0917 01:38:34.771384    1643 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/proxy-client.crt ...
	I0917 01:38:34.771395    1643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/proxy-client.crt: {Name:mk9dde3892b3c84a9e9cb2a9a495d6d16e1bb975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.771666    1643 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/proxy-client.key ...
	I0917 01:38:34.771673    1643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/proxy-client.key: {Name:mk855de47e4e7b471142add77a4055272c07a71c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.772119    1643 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 01:38:34.772168    1643 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 01:38:34.772220    1643 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 01:38:34.772262    1643 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 01:38:34.772758    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 01:38:34.793136    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 01:38:34.813454    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 01:38:34.833200    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 01:38:34.853196    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 01:38:34.874371    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 01:38:34.893410    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 01:38:34.914083    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 01:38:34.934157    1643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 01:38:34.954848    1643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 01:38:34.982443    1643 ssh_runner.go:195] Run: openssl version
	I0917 01:38:34.987844    1643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 01:38:35.000434    1643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:38:35.004175    1643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17  2024 /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:38:35.004222    1643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:38:35.008958    1643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
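
The three Run: lines above install the cluster CA into the guest's trust store using the OpenSSL c_rehash convention: a CA certificate is looked up via a symlink named after its subject hash with a ".0" suffix. A minimal sketch of the same lookup by hand (b5213941 is the hash the `openssl x509 -hash` call above printed for minikubeCA.pem):

	# print the subject hash OpenSSL uses to index CA certificates
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# that hash names the trust-store symlink created above
	ls -l /etc/ssl/certs/b5213941.0   # -> /etc/ssl/certs/minikubeCA.pem
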
	I0917 01:38:35.018712    1643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 01:38:35.021969    1643 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 01:38:35.022010    1643 kubeadm.go:392] StartCluster: {Name:addons-190000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-190000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:38:35.022112    1643 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 01:38:35.033168    1643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 01:38:35.041539    1643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 01:38:35.049763    1643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 01:38:35.058824    1643 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 01:38:35.058835    1643 kubeadm.go:157] found existing configuration files:
	
	I0917 01:38:35.058876    1643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 01:38:35.068168    1643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 01:38:35.068228    1643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 01:38:35.076789    1643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 01:38:35.084625    1643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 01:38:35.084674    1643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 01:38:35.092611    1643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 01:38:35.100454    1643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 01:38:35.100498    1643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 01:38:35.109193    1643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 01:38:35.117702    1643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 01:38:35.117746    1643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 01:38:35.126330    1643 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 01:38:35.160832    1643 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 01:38:35.160906    1643 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 01:38:35.235916    1643 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 01:38:35.236011    1643 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 01:38:35.236119    1643 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 01:38:35.244611    1643 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 01:38:35.289370    1643 out.go:235]   - Generating certificates and keys ...
	I0917 01:38:35.289446    1643 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 01:38:35.289502    1643 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 01:38:35.519370    1643 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 01:38:35.637335    1643 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 01:38:35.724916    1643 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 01:38:35.967144    1643 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 01:38:36.081801    1643 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 01:38:36.082024    1643 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-190000 localhost] and IPs [192.169.0.2 127.0.0.1 ::1]
	I0917 01:38:36.581373    1643 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 01:38:36.581509    1643 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-190000 localhost] and IPs [192.169.0.2 127.0.0.1 ::1]
	I0917 01:38:36.741727    1643 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 01:38:36.907984    1643 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 01:38:37.400704    1643 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 01:38:37.400776    1643 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 01:38:37.493950    1643 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 01:38:37.754054    1643 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 01:38:37.909962    1643 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 01:38:38.064611    1643 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 01:38:38.319430    1643 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 01:38:38.319918    1643 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 01:38:38.321816    1643 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 01:38:38.343335    1643 out.go:235]   - Booting up control plane ...
	I0917 01:38:38.343420    1643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 01:38:38.343484    1643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 01:38:38.343541    1643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 01:38:38.343630    1643 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 01:38:38.349760    1643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 01:38:38.349797    1643 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 01:38:38.474336    1643 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 01:38:38.474423    1643 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 01:38:39.474382    1643 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000916427s
	I0917 01:38:39.474467    1643 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 01:38:43.973631    1643 kubeadm.go:310] [api-check] The API server is healthy after 4.502269256s
	I0917 01:38:43.982966    1643 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 01:38:43.991691    1643 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 01:38:44.024200    1643 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 01:38:44.024352    1643 kubeadm.go:310] [mark-control-plane] Marking the node addons-190000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 01:38:44.038478    1643 kubeadm.go:310] [bootstrap-token] Using token: tkbl4x.azc3nslt77ktsgqe
	I0917 01:38:44.072495    1643 out.go:235]   - Configuring RBAC rules ...
	I0917 01:38:44.072703    1643 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 01:38:44.102237    1643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 01:38:44.106440    1643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 01:38:44.108531    1643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 01:38:44.110818    1643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 01:38:44.113459    1643 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 01:38:44.391677    1643 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 01:38:44.792189    1643 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 01:38:45.378007    1643 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 01:38:45.378814    1643 kubeadm.go:310] 
	I0917 01:38:45.378892    1643 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 01:38:45.378905    1643 kubeadm.go:310] 
	I0917 01:38:45.378981    1643 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 01:38:45.378986    1643 kubeadm.go:310] 
	I0917 01:38:45.379009    1643 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 01:38:45.379056    1643 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 01:38:45.379127    1643 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 01:38:45.379137    1643 kubeadm.go:310] 
	I0917 01:38:45.379189    1643 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 01:38:45.379198    1643 kubeadm.go:310] 
	I0917 01:38:45.379242    1643 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 01:38:45.379251    1643 kubeadm.go:310] 
	I0917 01:38:45.379300    1643 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 01:38:45.379358    1643 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 01:38:45.379416    1643 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 01:38:45.379424    1643 kubeadm.go:310] 
	I0917 01:38:45.379488    1643 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 01:38:45.379550    1643 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 01:38:45.379557    1643 kubeadm.go:310] 
	I0917 01:38:45.379621    1643 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tkbl4x.azc3nslt77ktsgqe \
	I0917 01:38:45.379710    1643 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7d55cb80c4cca3ccf2edf1a375f9019832d41d9684c82d992c80cf0c3419888b \
	I0917 01:38:45.379734    1643 kubeadm.go:310] 	--control-plane 
	I0917 01:38:45.379740    1643 kubeadm.go:310] 
	I0917 01:38:45.379816    1643 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 01:38:45.379823    1643 kubeadm.go:310] 
	I0917 01:38:45.379888    1643 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tkbl4x.azc3nslt77ktsgqe \
	I0917 01:38:45.379977    1643 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7d55cb80c4cca3ccf2edf1a375f9019832d41d9684c82d992c80cf0c3419888b 
	I0917 01:38:45.380815    1643 kubeadm.go:310] W0917 08:38:34.255237    1587 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 01:38:45.381052    1643 kubeadm.go:310] W0917 08:38:34.255771    1587 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 01:38:45.381141    1643 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
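
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). The standard recipe from the kubeadm documentation reproduces it on the control-plane node (this assumes the default RSA CA key):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
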
	I0917 01:38:45.381152    1643 cni.go:84] Creating CNI manager for ""
	I0917 01:38:45.381161    1643 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 01:38:45.403271    1643 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 01:38:45.423084    1643 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 01:38:45.430847    1643 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
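
The 496-byte conflist scp'd above is the bridge CNI configuration announced at 01:38:45.403271. As an illustration only (not the exact bytes minikube writes; the subnet and plugin options here are assumptions), a typical bridge-plus-portmap conflist looks like:

	cat <<'EOF' >/etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
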
	I0917 01:38:45.444432    1643 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 01:38:45.444507    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:45.444506    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-190000 minikube.k8s.io/updated_at=2024_09_17T01_38_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=addons-190000 minikube.k8s.io/primary=true
	I0917 01:38:45.458903    1643 ops.go:34] apiserver oom_adj: -16
	I0917 01:38:45.524769    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:46.025767    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:46.526995    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:47.026079    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:47.527021    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:48.026991    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:48.525365    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:49.025426    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:49.525531    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:50.025797    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:50.525679    1643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:50.608073    1643 kubeadm.go:1113] duration metric: took 5.163568906s to wait for elevateKubeSystemPrivileges
	I0917 01:38:50.608096    1643 kubeadm.go:394] duration metric: took 15.585910358s to StartCluster
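
The burst of `kubectl get sa default` calls between 01:38:45.524769 and 01:38:50.608073 is minikube polling on a 500ms interval until kubeadm's controllers have created the default ServiceAccount, at which point the minikube-rbac ClusterRoleBinding requested at 01:38:45.444507 can bind it. A minimal equivalent of that wait loop:

	# poll until the default ServiceAccount exists
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
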
	I0917 01:38:50.608112    1643 settings.go:142] acquiring lock: {Name:mk5de70c670a23cf49fdbd19ddb5c1d2a9de9a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:50.608268    1643 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 01:38:50.608499    1643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:50.609080    1643 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 01:38:50.609088    1643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 01:38:50.609109    1643 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 01:38:50.609189    1643 addons.go:69] Setting helm-tiller=true in profile "addons-190000"
	I0917 01:38:50.609200    1643 addons.go:69] Setting registry=true in profile "addons-190000"
	I0917 01:38:50.609211    1643 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-190000"
	I0917 01:38:50.609227    1643 addons.go:234] Setting addon registry=true in "addons-190000"
	I0917 01:38:50.609239    1643 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-190000"
	I0917 01:38:50.609244    1643 config.go:182] Loaded profile config "addons-190000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 01:38:50.609255    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.609255    1643 addons.go:69] Setting gcp-auth=true in profile "addons-190000"
	I0917 01:38:50.609263    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.609249    1643 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-190000"
	I0917 01:38:50.609260    1643 addons.go:69] Setting cloud-spanner=true in profile "addons-190000"
	I0917 01:38:50.609287    1643 mustload.go:65] Loading cluster: addons-190000
	I0917 01:38:50.609295    1643 addons.go:234] Setting addon cloud-spanner=true in "addons-190000"
	I0917 01:38:50.609298    1643 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-190000"
	I0917 01:38:50.609266    1643 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-190000"
	I0917 01:38:50.609247    1643 addons.go:69] Setting default-storageclass=true in profile "addons-190000"
	I0917 01:38:50.609323    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.609334    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.609333    1643 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-190000"
	I0917 01:38:50.609334    1643 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-190000"
	I0917 01:38:50.609304    1643 addons.go:69] Setting storage-provisioner=true in profile "addons-190000"
	I0917 01:38:50.609375    1643 addons.go:234] Setting addon storage-provisioner=true in "addons-190000"
	I0917 01:38:50.609421    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.609437    1643 config.go:182] Loaded profile config "addons-190000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 01:38:50.609504    1643 addons.go:69] Setting ingress-dns=true in profile "addons-190000"
	I0917 01:38:50.609537    1643 addons.go:234] Setting addon ingress-dns=true in "addons-190000"
	I0917 01:38:50.609586    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.609582    1643 addons.go:69] Setting ingress=true in profile "addons-190000"
	I0917 01:38:50.609636    1643 addons.go:234] Setting addon ingress=true in "addons-190000"
	I0917 01:38:50.609669    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.609684    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.609693    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.609239    1643 addons.go:234] Setting addon helm-tiller=true in "addons-190000"
	I0917 01:38:50.609707    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.609714    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.609725    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.609765    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.609767    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.609789    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.609796    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.609797    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.609824    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.609826    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.609191    1643 addons.go:69] Setting yakd=true in profile "addons-190000"
	I0917 01:38:50.609847    1643 addons.go:234] Setting addon yakd=true in "addons-190000"
	I0917 01:38:50.609848    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.609858    1643 addons.go:69] Setting volcano=true in profile "addons-190000"
	I0917 01:38:50.609868    1643 addons.go:234] Setting addon volcano=true in "addons-190000"
	I0917 01:38:50.609872    1643 addons.go:69] Setting inspektor-gadget=true in profile "addons-190000"
	I0917 01:38:50.609882    1643 addons.go:234] Setting addon inspektor-gadget=true in "addons-190000"
	I0917 01:38:50.609897    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.609940    1643 addons.go:69] Setting metrics-server=true in profile "addons-190000"
	I0917 01:38:50.609976    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.609990    1643 addons.go:234] Setting addon metrics-server=true in "addons-190000"
	I0917 01:38:50.610016    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.611453    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.611687    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.611860    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.611937    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.611936    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.611973    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.612096    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.612092    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.612203    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.612212    1643 addons.go:69] Setting volumesnapshots=true in profile "addons-190000"
	I0917 01:38:50.612274    1643 addons.go:234] Setting addon volumesnapshots=true in "addons-190000"
	I0917 01:38:50.612274    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.612341    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.612365    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.612530    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.614229    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.614483    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.614593    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.614617    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.614842    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.615094    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.618055    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.618412    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.619147    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.625095    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49671
	I0917 01:38:50.629444    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49673
	I0917 01:38:50.629444    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.630986    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49674
	I0917 01:38:50.631975    1643 out.go:177] * Verifying Kubernetes components...
	I0917 01:38:50.632781    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49678
	I0917 01:38:50.632377    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49677
	I0917 01:38:50.632847    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.632413    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.632604    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.633105    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.637562    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.637577    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49681
	I0917 01:38:50.637610    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.637624    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49680
	I0917 01:38:50.637679    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.637681    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.637713    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.637716    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.638893    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.639007    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.639144    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.639206    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.639256    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.639428    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.639474    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49683
	I0917 01:38:50.644388    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.644097    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.644729    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.644834    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49687
	I0917 01:38:50.644830    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.645542    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.646132    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.646476    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49688
	I0917 01:38:50.646758    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.646664    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.646874    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.646991    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.647196    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.647331    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.650447    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49690
	I0917 01:38:50.650556    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49691
	I0917 01:38:50.650695    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.650704    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.650908    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.650963    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.650961    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.651023    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.651870    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.652443    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.652455    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.652503    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.652999    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.653872    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.654784    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.654855    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.654884    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.655026    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.655068    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.654892    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.654962    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.655149    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.655291    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.655306    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49695
	I0917 01:38:50.655653    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.655778    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.655870    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.656025    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.656042    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.656065    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.659506    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.660384    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.660370    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.661346    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.661341    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.661393    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.661607    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.661629    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.661654    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49696
	I0917 01:38:50.661791    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.663531    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.663547    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.663801    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.663833    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.663850    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49697
	I0917 01:38:50.663994    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.664182    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.664347    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.664397    1643 addons.go:234] Setting addon default-storageclass=true in "addons-190000"
	I0917 01:38:50.664458    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.667963    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.668284    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.668311    1643 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-190000"
	I0917 01:38:50.668471    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:50.668480    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.668506    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49701
	I0917 01:38:50.668633    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.668772    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.668743    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.668617    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.669059    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.671919    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.672291    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.672295    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.672146    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.672421    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.672496    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.672527    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49703
	I0917 01:38:50.672672    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.672752    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.672763    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.672771    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.672831    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.672867    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.672956    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.676381    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.676495    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.676447    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49705
	I0917 01:38:50.676734    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.676713    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.676809    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.679955    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.680130    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.682049    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.682079    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.682167    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.682695    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49707
	I0917 01:38:50.683397    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.683986    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.683996    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49708
	I0917 01:38:50.684158    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.684142    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.684215    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.684232    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.684449    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.686832    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.687458    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.687638    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.687654    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.687729    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.687766    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49711
	I0917 01:38:50.687965    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.688102    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.688104    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.691468    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.692658    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.693218    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.693228    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.693260    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.706701    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.706718    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.693286    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49713
	I0917 01:38:50.706817    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49714
	I0917 01:38:50.693034    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.706995    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49718
	I0917 01:38:50.708346    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:50.708501    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.708727    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49715
	I0917 01:38:50.708451    1643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:50.708985    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49717
	I0917 01:38:50.709024    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.709206    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.709230    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.709181    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49716
	I0917 01:38:50.709357    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:50.709576    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.709607    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.714227    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.714244    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.714303    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.714302    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.714324    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.714440    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:50.714506    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.716645    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.716764    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.716176    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.716880    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.716884    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.716725    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.716918    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.716957    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.717052    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:50.730744    1643 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 01:38:50.719799    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.720036    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.720239    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.720880    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:50.720881    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.720911    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.720971    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.724181    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.724379    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.726462    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49725
	I0917 01:38:50.726702    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49726
	I0917 01:38:50.729524    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49727
	I0917 01:38:50.731124    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.731214    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.731224    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.731236    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.731421    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.731722    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49728
	I0917 01:38:50.733721    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49729
	I0917 01:38:50.748258    1643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
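
The pipeline above rewrites the live CoreDNS Corefile: it reads the coredns ConfigMap, uses sed to splice a hosts block ahead of the `forward . /etc/resolv.conf` line (and a `log` directive ahead of `errors`), then replaces the ConfigMap. The spliced block, as given in the sed script, maps host.minikube.internal to the hyperkit gateway:

	hosts {
	   192.169.0.1 host.minikube.internal
	   fallthrough
	}
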
	I0917 01:38:50.751846    1643 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 01:38:50.752031    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.752038    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.752426    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.752462    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.752477    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.810176    1643 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 01:38:50.810562    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.810770    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.810803    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.810848    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.810234    1643 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 01:38:50.830937    1643 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 01:38:50.810954    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.810959    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.810970    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.811027    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.811109    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.811157    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.811176    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.811205    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.811206    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.811523    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.811894    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:50.831270    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.831291    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.831302    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.831294    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.831074    1643 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 01:38:50.868316    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 01:38:50.831497    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.831528    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:50.868361    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:50.833029    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.833198    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.833224    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.833268    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.833415    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.833535    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:50.905006    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.833618    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:50.905037    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:50.841462    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49735
	I0917 01:38:50.868121    1643 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 01:38:50.905087    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 01:38:50.868420    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.868615    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:50.905108    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:50.868628    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.905136    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.868669    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.868669    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.868708    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.905197    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.868740    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.869817    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:50.869815    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:50.904749    1643 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 01:38:50.905313    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:50.905316    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:50.904829    1643 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 01:38:50.905509    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.905528    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:50.905527    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.947078    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.947078    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:50.905589    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:50.905757    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:50.906748    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:50.906753    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:51.021117    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:50.913365    1643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:38:50.925914    1643 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 01:38:50.926168    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 01:38:50.926288    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:50.926307    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:50.946884    1643 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 01:38:50.905538    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:50.947272    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:50.983921    1643 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 01:38:50.984556    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:51.020916    1643 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 01:38:51.030199    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49738
	I0917 01:38:51.041962    1643 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 01:38:51.042179    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 01:38:51.042261    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:51.042284    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.042361    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:51.042389    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.042398    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.042422    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:51.043483    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:51.064724    1643 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 01:38:51.065185    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.065434    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:51.065665    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.065678    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:51.065866    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.065855    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.066116    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:51.066131    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:51.067437    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:51.086121    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:51.086306    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.086327    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.107044    1643 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 01:38:51.107049    1643 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 01:38:51.144372    1643 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 01:38:51.107065    1643 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 01:38:51.107157    1643 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 01:38:51.107489    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:51.107654    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:51.114638    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 01:38:51.129357    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 01:38:51.143962    1643 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 01:38:51.107061    1643 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 01:38:51.144557    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.144591    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.180990    1643 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 01:38:51.181423    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.202018    1643 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 01:38:51.202339    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 01:38:51.202661    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:51.204584    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:51.239561    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.239614    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:51.239622    1643 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 01:38:51.239718    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:51.239476    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 01:38:51.239911    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.281173    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.239958    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.239972    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.239975    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.240085    1643 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 01:38:51.259829    1643 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 01:38:51.281311    1643 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 01:38:51.281339    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.281344    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.281317    1643 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 01:38:51.281368    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.281360    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.280865    1643 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0917 01:38:51.280865    1643 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 01:38:51.260206    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.281448    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.281471    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:51.281543    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.281550    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.281552    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.281609    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.337910    1643 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 01:38:51.338204    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.338227    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.338243    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.338227    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.338249    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.338249    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.338271    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:51.359080    1643 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 01:38:51.396022    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.359285    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.359326    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.359332    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:51.359327    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.396111    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:51.359338    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.359366    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.360410    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:51.453801    1643 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0917 01:38:51.396221    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.396237    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.396267    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.396356    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.432928    1643 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 01:38:51.433007    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 01:38:51.447688    1643 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 01:38:51.453978    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.454278    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.490664    1643 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 01:38:51.528252    1643 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 01:38:51.528378    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.528506    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.546691    1643 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 01:38:51.546703    1643 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 01:38:51.564725    1643 out.go:177]   - Using image docker.io/busybox:stable
	I0917 01:38:51.599026    1643 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 01:38:51.602056    1643 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0917 01:38:51.601917    1643 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 01:38:51.639136    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 01:38:51.609069    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 01:38:51.616397    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 01:38:51.634327    1643 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 01:38:51.639227    1643 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 01:38:51.636761    1643 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 01:38:51.639262    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 01:38:51.638793    1643 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0917 01:38:51.639155    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.639432    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.647412    1643 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0917 01:38:51.648036    1643 node_ready.go:35] waiting up to 6m0s for node "addons-190000" to be "Ready" ...
	I0917 01:38:51.676152    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.693304    1643 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 01:38:51.705455    1643 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 01:38:51.713029    1643 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 01:38:51.713043    1643 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 01:38:51.712367    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 01:38:51.712630    1643 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 01:38:51.713161    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.725760    1643 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 01:38:51.734417    1643 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 01:38:51.734565    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.734656    1643 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 01:38:51.734666    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0917 01:38:51.734679    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.734799    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.734921    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.735027    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.735127    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.753055    1643 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 01:38:51.753068    1643 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 01:38:51.754848    1643 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 01:38:51.807502    1643 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 01:38:51.807516    1643 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 01:38:51.812947    1643 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 01:38:51.813024    1643 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 01:38:51.813037    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 01:38:51.813059    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.813211    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.813357    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.813466    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.813566    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:51.842601    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 01:38:51.842969    1643 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 01:38:51.842980    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 01:38:51.844013    1643 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 01:38:51.844022    1643 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 01:38:51.856583    1643 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 01:38:51.860340    1643 node_ready.go:49] node "addons-190000" has status "Ready":"True"
	I0917 01:38:51.860353    1643 node_ready.go:38] duration metric: took 184.289089ms for node "addons-190000" to be "Ready" ...
	I0917 01:38:51.860358    1643 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 01:38:51.880052    1643 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 01:38:51.880065    1643 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 01:38:51.880563    1643 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f8xkv" in "kube-system" namespace to be "Ready" ...
	I0917 01:38:51.914838    1643 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 01:38:51.936036    1643 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 01:38:51.936055    1643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 01:38:51.936072    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:51.936225    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:51.936340    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:51.936462    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:51.936568    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:52.001664    1643 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 01:38:52.001684    1643 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 01:38:52.104165    1643 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 01:38:52.104180    1643 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 01:38:52.166443    1643 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 01:38:52.166458    1643 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 01:38:52.192800    1643 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-190000" context rescaled to 1 replicas
	I0917 01:38:52.220089    1643 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 01:38:52.220110    1643 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 01:38:52.262707    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 01:38:52.326879    1643 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 01:38:52.326891    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 01:38:52.366071    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 01:38:52.503087    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.263397038s)
	I0917 01:38:52.503119    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:52.503127    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:52.503284    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:52.503287    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:52.503298    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:52.503306    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:52.503310    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:52.503446    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.263779682s)
	I0917 01:38:52.503461    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:52.503460    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:52.503466    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:52.503469    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:52.503475    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:52.503625    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:52.503646    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:52.503654    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:52.503670    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:52.503679    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:52.503833    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:52.503853    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:52.503869    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:52.535199    1643 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 01:38:52.535218    1643 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 01:38:52.592993    1643 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 01:38:52.593006    1643 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 01:38:52.641980    1643 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 01:38:52.642004    1643 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 01:38:52.648852    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 01:38:52.670543    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 01:38:52.703120    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.174894093s)
	I0917 01:38:52.703154    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:52.703165    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:52.703325    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:52.703327    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:52.703337    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:52.703345    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:52.703352    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:52.703503    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:52.703510    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:52.703521    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:52.899881    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 01:38:52.955836    1643 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 01:38:52.955849    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 01:38:52.977990    1643 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 01:38:52.978004    1643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 01:38:53.167825    1643 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 01:38:53.167838    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 01:38:53.333243    1643 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 01:38:53.333256    1643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 01:38:53.378692    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 01:38:53.584401    1643 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 01:38:53.584415    1643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 01:38:53.595178    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 01:38:53.709043    1643 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 01:38:53.709058    1643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 01:38:53.836507    1643 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 01:38:53.836521    1643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 01:38:53.884650    1643 pod_ready.go:103] pod "coredns-7c65d6cfc9-f8xkv" in "kube-system" namespace has status "Ready":"False"
	I0917 01:38:54.002906    1643 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 01:38:54.002918    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 01:38:54.174714    1643 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 01:38:54.174728    1643 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 01:38:54.460095    1643 pod_ready.go:93] pod "coredns-7c65d6cfc9-f8xkv" in "kube-system" namespace has status "Ready":"True"
	I0917 01:38:54.460110    1643 pod_ready.go:82] duration metric: took 2.57950421s for pod "coredns-7c65d6cfc9-f8xkv" in "kube-system" namespace to be "Ready" ...
	I0917 01:38:54.460117    1643 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j9dr7" in "kube-system" namespace to be "Ready" ...
	I0917 01:38:54.483534    1643 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-j9dr7" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-j9dr7" not found
	I0917 01:38:54.483548    1643 pod_ready.go:82] duration metric: took 23.426142ms for pod "coredns-7c65d6cfc9-j9dr7" in "kube-system" namespace to be "Ready" ...
	E0917 01:38:54.483557    1643 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-j9dr7" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-j9dr7" not found
	I0917 01:38:54.483566    1643 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-190000" in "kube-system" namespace to be "Ready" ...
	I0917 01:38:54.490323    1643 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 01:38:54.490336    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 01:38:54.814655    1643 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 01:38:54.814670    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 01:38:55.183512    1643 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 01:38:55.183528    1643 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 01:38:55.340191    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 01:38:55.501890    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.862630453s)
	I0917 01:38:55.501917    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.862692561s)
	I0917 01:38:55.501920    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:55.501930    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:55.501934    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:55.501936    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.788814728s)
	I0917 01:38:55.501941    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:55.501954    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:55.502001    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:55.502031    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (3.659353341s)
	I0917 01:38:55.502050    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:55.502060    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:55.502140    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:55.502140    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:55.502153    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:55.502159    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:55.502165    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:55.502158    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:55.502198    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:55.502207    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:55.502213    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:55.502215    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:55.502239    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:55.502261    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:55.502267    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:55.502278    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:55.502218    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:55.502325    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:55.502246    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:55.502358    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:55.502284    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:55.502362    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:55.502374    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:55.502384    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:55.502545    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:55.502567    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:55.502568    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:55.502588    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:55.502595    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:55.502774    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:55.502814    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:55.502822    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:55.502828    1643 addons.go:475] Verifying addon registry=true in "addons-190000"
	I0917 01:38:55.525607    1643 out.go:177] * Verifying registry addon...
	I0917 01:38:55.584589    1643 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 01:38:55.589277    1643 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 01:38:55.589288    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:55.591326    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:38:55.591336    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:38:55.591493    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:38:55.591504    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:38:55.591514    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:38:56.091075    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:56.555328    1643 pod_ready.go:103] pod "etcd-addons-190000" in "kube-system" namespace has status "Ready":"False"
	I0917 01:38:56.603023    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:57.111100    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:57.771145    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:57.890623    1643 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 01:38:57.890645    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:57.890856    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:57.890962    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:57.891071    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:57.891155    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:58.089027    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:58.141325    1643 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 01:38:58.199412    1643 addons.go:234] Setting addon gcp-auth=true in "addons-190000"
	I0917 01:38:58.199440    1643 host.go:66] Checking if "addons-190000" exists ...
	I0917 01:38:58.199726    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:58.199747    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:58.208881    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49756
	I0917 01:38:58.209252    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:58.209586    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:58.209599    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:58.209838    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:58.210222    1643 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:38:58.210240    1643 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:38:58.221094    1643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49758
	I0917 01:38:58.221575    1643 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:38:58.221945    1643 main.go:141] libmachine: Using API Version  1
	I0917 01:38:58.221963    1643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:38:58.222201    1643 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:38:58.222342    1643 main.go:141] libmachine: (addons-190000) Calling .GetState
	I0917 01:38:58.222436    1643 main.go:141] libmachine: (addons-190000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 01:38:58.222518    1643 main.go:141] libmachine: (addons-190000) DBG | hyperkit pid from json: 1658
	I0917 01:38:58.223566    1643 main.go:141] libmachine: (addons-190000) Calling .DriverName
	I0917 01:38:58.223741    1643 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 01:38:58.223753    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHHostname
	I0917 01:38:58.223839    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHPort
	I0917 01:38:58.223929    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHKeyPath
	I0917 01:38:58.224041    1643 main.go:141] libmachine: (addons-190000) Calling .GetSSHUsername
	I0917 01:38:58.224123    1643 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/addons-190000/id_rsa Username:docker}
	I0917 01:38:58.587309    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:58.999669    1643 pod_ready.go:103] pod "etcd-addons-190000" in "kube-system" namespace has status "Ready":"False"
	I0917 01:38:59.101175    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:59.607236    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:00.022361    1643 pod_ready.go:93] pod "etcd-addons-190000" in "kube-system" namespace has status "Ready":"True"
	I0917 01:39:00.022375    1643 pod_ready.go:82] duration metric: took 5.538738869s for pod "etcd-addons-190000" in "kube-system" namespace to be "Ready" ...
	I0917 01:39:00.022384    1643 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-190000" in "kube-system" namespace to be "Ready" ...
	I0917 01:39:00.086728    1643 pod_ready.go:93] pod "kube-apiserver-addons-190000" in "kube-system" namespace has status "Ready":"True"
	I0917 01:39:00.086740    1643 pod_ready.go:82] duration metric: took 64.344763ms for pod "kube-apiserver-addons-190000" in "kube-system" namespace to be "Ready" ...
	I0917 01:39:00.086748    1643 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-190000" in "kube-system" namespace to be "Ready" ...
	I0917 01:39:00.123387    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:00.137273    1643 pod_ready.go:93] pod "kube-controller-manager-addons-190000" in "kube-system" namespace has status "Ready":"True"
	I0917 01:39:00.137286    1643 pod_ready.go:82] duration metric: took 50.532165ms for pod "kube-controller-manager-addons-190000" in "kube-system" namespace to be "Ready" ...
	I0917 01:39:00.137294    1643 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q9nxs" in "kube-system" namespace to be "Ready" ...
	I0917 01:39:00.252345    1643 pod_ready.go:93] pod "kube-proxy-q9nxs" in "kube-system" namespace has status "Ready":"True"
	I0917 01:39:00.252358    1643 pod_ready.go:82] duration metric: took 115.058282ms for pod "kube-proxy-q9nxs" in "kube-system" namespace to be "Ready" ...
	I0917 01:39:00.252365    1643 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-190000" in "kube-system" namespace to be "Ready" ...
	I0917 01:39:00.311933    1643 pod_ready.go:93] pod "kube-scheduler-addons-190000" in "kube-system" namespace has status "Ready":"True"
	I0917 01:39:00.311948    1643 pod_ready.go:82] duration metric: took 59.577618ms for pod "kube-scheduler-addons-190000" in "kube-system" namespace to be "Ready" ...
	I0917 01:39:00.311954    1643 pod_ready.go:39] duration metric: took 8.451491357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
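The pod_ready waits above poll each system-critical pod until its Ready condition reports True. A minimal client-go sketch of that polling pattern, assuming the kubeconfig at /var/lib/minikube/kubeconfig and reusing the etcd-addons-190000 pod from this run; the helper name isPodReady and the intervals are illustrative, not minikube's actual implementation:

	// Sketch: poll one pod's Ready condition with client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Wait up to 6 minutes (as the log's "waiting up to 6m0s" does), checking every 2s.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-addons-190000", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat lookup errors as transient and keep polling
				}
				return isPodReady(pod), nil
			})
		fmt.Println("ready:", err == nil)
	}
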
	I0917 01:39:00.311974    1643 api_server.go:52] waiting for apiserver process to appear ...
	I0917 01:39:00.312043    1643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 01:39:00.617008    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:00.922421    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.556230389s)
	I0917 01:39:00.922465    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:00.922481    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:00.922484    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.65965789s)
	I0917 01:39:00.922486    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.273518672s)
	I0917 01:39:00.922505    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:00.922512    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:00.922515    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:00.922527    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:00.922610    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.251938102s)
	I0917 01:39:00.922639    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:00.922653    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:00.922693    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:00.922697    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:00.922706    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:00.922716    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:00.922708    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:00.922723    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:00.922726    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:00.922755    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:00.922728    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:00.922784    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.022774644s)
	I0917 01:39:00.922784    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:00.922809    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:00.922810    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:00.922822    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:00.922834    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:00.922839    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:00.922888    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:00.922979    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:00.922977    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.544175394s)
	I0917 01:39:00.922999    1643 main.go:141] libmachine: Successfully made call to close driver server
	W0917 01:39:00.923022    1643 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 01:39:00.923046    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:00.923053    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.327772483s)
	I0917 01:39:00.923057    1643 retry.go:31] will retry after 372.77299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
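	Note on the failure above: it is a CRD establishment race. The same kubectl apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass that instantiates them, and the API server has not yet registered the new kind by the time the class is validated, hence "no matches for kind ... ensure CRDs are installed first". The log shows minikube recovering by retrying and, further down, re-applying with --force. A minimal manual equivalent, sketched with the file and CRD names taken from the log, is to wait for the CRD to report Established before applying the class:
	
		# create the CRDs first
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		# block until the API server has registered the new kind
		kubectl wait --for=condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		# now the custom resource can be applied safely
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	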
	I0917 01:39:00.923072    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:00.923081    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:00.923102    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:00.923057    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:00.923117    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:00.923121    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:00.923172    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:00.923191    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:00.923197    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:00.923205    1643 addons.go:475] Verifying addon ingress=true in "addons-190000"
	I0917 01:39:00.923207    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:00.923230    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:00.923237    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:00.923244    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:00.923427    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:00.923520    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:00.923529    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:00.923565    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:00.923588    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:00.923596    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:00.923605    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:00.924080    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:00.924125    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:00.923251    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:00.924151    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:00.924065    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:00.924175    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:00.924077    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:00.924314    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:00.924319    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:00.924330    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:00.924345    1643 addons.go:475] Verifying addon metrics-server=true in "addons-190000"
	I0917 01:39:00.966788    1643 out.go:177] * Verifying ingress addon...
	I0917 01:39:00.988699    1643 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-190000 service yakd-dashboard -n yakd-dashboard
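	
	The printed command is how a service is reached from the host under a VM driver: minikube service resolves the service endpoint and opens (or prints) a reachable URL. For scripting, the --url flag prints the URL without launching a browser; a sketch reusing the profile, service, and namespace from the log:
	
		# print the reachable URL for the yakd dashboard instead of opening it
		minikube -p addons-190000 service yakd-dashboard -n yakd-dashboard --url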
	
	I0917 01:39:01.047706    1643 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 01:39:01.101784    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:01.101797    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:01.101952    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:01.101961    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:01.101987    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:01.110448    1643 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 01:39:01.110460    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:01.111119    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:01.296100    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 01:39:01.584374    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:01.630723    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:01.792828    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.452531078s)
	I0917 01:39:01.792859    1643 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.569061244s)
	I0917 01:39:01.792858    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:01.792891    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:01.792891    1643 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.480822859s)
	I0917 01:39:01.792905    1643 api_server.go:72] duration metric: took 11.18367808s to wait for apiserver process to appear ...
	I0917 01:39:01.792968    1643 api_server.go:88] waiting for apiserver healthz status ...
	I0917 01:39:01.792996    1643 api_server.go:253] Checking apiserver healthz at https://192.169.0.2:8443/healthz ...
	I0917 01:39:01.793060    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:01.793072    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:01.793079    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:01.793101    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:01.793078    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:01.793235    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:01.793253    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:01.793253    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:01.793263    1643 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-190000"
	I0917 01:39:01.801274    1643 api_server.go:279] https://192.169.0.2:8443/healthz returned 200:
	ok
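	The healthz probe above is a plain HTTPS GET against the apiserver; /healthz (like /livez and /readyz) is readable without credentials under the default system:public-info-viewer binding, so the same check can be reproduced by hand. A sketch, assuming the apiserver address from the log is reachable from the host:
	
		# -k skips TLS verification against the cluster's self-signed cert
		curl -k https://192.169.0.2:8443/healthz
		# a healthy apiserver answers: ok
	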
	I0917 01:39:01.804654    1643 api_server.go:141] control plane version: v1.31.1
	I0917 01:39:01.804670    1643 api_server.go:131] duration metric: took 11.694024ms to wait for apiserver health ...
	I0917 01:39:01.804676    1643 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 01:39:01.817203    1643 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 01:39:01.836650    1643 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 01:39:01.932335    1643 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 01:39:01.968738    1643 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 01:39:02.006763    1643 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 01:39:02.006782    1643 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 01:39:02.017669    1643 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 01:39:02.017681    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:02.019307    1643 system_pods.go:59] 18 kube-system pods found
	I0917 01:39:02.019329    1643 system_pods.go:61] "coredns-7c65d6cfc9-f8xkv" [53fa24db-8700-407b-a2af-3a35ae32e683] Running
	I0917 01:39:02.019338    1643 system_pods.go:61] "csi-hostpath-attacher-0" [da510ec0-5624-4eba-b543-c46eff3e25d0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 01:39:02.019345    1643 system_pods.go:61] "csi-hostpath-resizer-0" [82545911-881d-45c3-9a34-fe32c11ecdea] Pending
	I0917 01:39:02.019360    1643 system_pods.go:61] "csi-hostpathplugin-fc75w" [1dbeb00b-6a95-441c-8b52-8c533e0400ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 01:39:02.019365    1643 system_pods.go:61] "etcd-addons-190000" [a1f6d6e3-1de7-460a-8b0f-a46ecd194b3d] Running
	I0917 01:39:02.019368    1643 system_pods.go:61] "kube-apiserver-addons-190000" [673c8d6a-f14b-4e2e-ae8b-cea5b708da0a] Running
	I0917 01:39:02.019372    1643 system_pods.go:61] "kube-controller-manager-addons-190000" [73f425f4-6be5-4b17-b3af-5ef8f018a232] Running
	I0917 01:39:02.019378    1643 system_pods.go:61] "kube-ingress-dns-minikube" [24515e70-92da-482b-ab57-f81f6564ca96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0917 01:39:02.019381    1643 system_pods.go:61] "kube-proxy-q9nxs" [c9ea3a2a-4285-4b6e-ba8a-a3d105fe8e3d] Running
	I0917 01:39:02.019384    1643 system_pods.go:61] "kube-scheduler-addons-190000" [32ebaef8-38cd-4356-9c95-aebd5ebb850c] Running
	I0917 01:39:02.019390    1643 system_pods.go:61] "metrics-server-84c5f94fbc-zzkrt" [ce680a4d-91c5-4b38-9c62-9e832f1427c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 01:39:02.019397    1643 system_pods.go:61] "nvidia-device-plugin-daemonset-nzgpj" [76589ce8-f1e9-4d47-98e3-18f0b6b25a2d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0917 01:39:02.019402    1643 system_pods.go:61] "registry-66c9cd494c-jc9c9" [5ebe9b61-99d8-42d6-9925-57fe4224f525] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 01:39:02.019411    1643 system_pods.go:61] "registry-proxy-wx6wt" [2d8d4a63-2d55-49da-9763-4fb31b7dc6c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 01:39:02.019415    1643 system_pods.go:61] "snapshot-controller-56fcc65765-x99dl" [950d585a-5802-47a5-b3ce-b59f2df8fe13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 01:39:02.019421    1643 system_pods.go:61] "snapshot-controller-56fcc65765-ztc62" [dbb41427-2cea-477b-9756-726fc59b4039] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 01:39:02.019425    1643 system_pods.go:61] "storage-provisioner" [e6db3cee-79c3-4523-8da0-52c51e9729d9] Running
	I0917 01:39:02.019429    1643 system_pods.go:61] "tiller-deploy-b48cc5f79-x9frd" [ed4417f5-1c3f-4365-8f29-a33369722c59] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0917 01:39:02.019443    1643 system_pods.go:74] duration metric: took 214.759031ms to wait for pod list to return data ...
	I0917 01:39:02.019451    1643 default_sa.go:34] waiting for default service account to be created ...
	I0917 01:39:02.041094    1643 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 01:39:02.041107    1643 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 01:39:02.060729    1643 default_sa.go:45] found service account: "default"
	I0917 01:39:02.060743    1643 default_sa.go:55] duration metric: took 41.286947ms for default service account to be created ...
	I0917 01:39:02.060749    1643 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 01:39:02.115158    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:02.115336    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:02.117831    1643 system_pods.go:86] 18 kube-system pods found
	I0917 01:39:02.117844    1643 system_pods.go:89] "coredns-7c65d6cfc9-f8xkv" [53fa24db-8700-407b-a2af-3a35ae32e683] Running
	I0917 01:39:02.117850    1643 system_pods.go:89] "csi-hostpath-attacher-0" [da510ec0-5624-4eba-b543-c46eff3e25d0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 01:39:02.117858    1643 system_pods.go:89] "csi-hostpath-resizer-0" [82545911-881d-45c3-9a34-fe32c11ecdea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 01:39:02.117869    1643 system_pods.go:89] "csi-hostpathplugin-fc75w" [1dbeb00b-6a95-441c-8b52-8c533e0400ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 01:39:02.117874    1643 system_pods.go:89] "etcd-addons-190000" [a1f6d6e3-1de7-460a-8b0f-a46ecd194b3d] Running
	I0917 01:39:02.117877    1643 system_pods.go:89] "kube-apiserver-addons-190000" [673c8d6a-f14b-4e2e-ae8b-cea5b708da0a] Running
	I0917 01:39:02.117881    1643 system_pods.go:89] "kube-controller-manager-addons-190000" [73f425f4-6be5-4b17-b3af-5ef8f018a232] Running
	I0917 01:39:02.117885    1643 system_pods.go:89] "kube-ingress-dns-minikube" [24515e70-92da-482b-ab57-f81f6564ca96] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0917 01:39:02.117888    1643 system_pods.go:89] "kube-proxy-q9nxs" [c9ea3a2a-4285-4b6e-ba8a-a3d105fe8e3d] Running
	I0917 01:39:02.117891    1643 system_pods.go:89] "kube-scheduler-addons-190000" [32ebaef8-38cd-4356-9c95-aebd5ebb850c] Running
	I0917 01:39:02.117896    1643 system_pods.go:89] "metrics-server-84c5f94fbc-zzkrt" [ce680a4d-91c5-4b38-9c62-9e832f1427c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 01:39:02.117901    1643 system_pods.go:89] "nvidia-device-plugin-daemonset-nzgpj" [76589ce8-f1e9-4d47-98e3-18f0b6b25a2d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0917 01:39:02.117905    1643 system_pods.go:89] "registry-66c9cd494c-jc9c9" [5ebe9b61-99d8-42d6-9925-57fe4224f525] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 01:39:02.117910    1643 system_pods.go:89] "registry-proxy-wx6wt" [2d8d4a63-2d55-49da-9763-4fb31b7dc6c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 01:39:02.117914    1643 system_pods.go:89] "snapshot-controller-56fcc65765-x99dl" [950d585a-5802-47a5-b3ce-b59f2df8fe13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 01:39:02.117921    1643 system_pods.go:89] "snapshot-controller-56fcc65765-ztc62" [dbb41427-2cea-477b-9756-726fc59b4039] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 01:39:02.117925    1643 system_pods.go:89] "storage-provisioner" [e6db3cee-79c3-4523-8da0-52c51e9729d9] Running
	I0917 01:39:02.117928    1643 system_pods.go:89] "tiller-deploy-b48cc5f79-x9frd" [ed4417f5-1c3f-4365-8f29-a33369722c59] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0917 01:39:02.117935    1643 system_pods.go:126] duration metric: took 57.18108ms to wait for k8s-apps to be running ...
	I0917 01:39:02.117946    1643 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 01:39:02.117997    1643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 01:39:02.181509    1643 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 01:39:02.181522    1643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 01:39:02.341125    1643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 01:39:02.436481    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:02.552299    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:02.589513    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:02.895023    1643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.598870224s)
	I0917 01:39:02.895044    1643 system_svc.go:56] duration metric: took 777.087456ms WaitForService to wait for kubelet
	I0917 01:39:02.895051    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:02.895060    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:02.895057    1643 kubeadm.go:582] duration metric: took 12.285816876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:39:02.895072    1643 node_conditions.go:102] verifying NodePressure condition ...
	I0917 01:39:02.895250    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:02.895253    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:02.895262    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:02.895269    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:02.895280    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:02.895419    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:02.895444    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:02.895452    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:02.899848    1643 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 01:39:02.899864    1643 node_conditions.go:123] node cpu capacity is 2
	I0917 01:39:02.899878    1643 node_conditions.go:105] duration metric: took 4.800392ms to run NodePressure ...
	I0917 01:39:02.899889    1643 start.go:241] waiting for startup goroutines ...
	I0917 01:39:02.936545    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:03.051414    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:03.093791    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:03.191213    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:03.191227    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:03.191387    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:03.191397    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:03.191413    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:03.191417    1643 main.go:141] libmachine: Making call to close driver server
	I0917 01:39:03.191427    1643 main.go:141] libmachine: (addons-190000) Calling .Close
	I0917 01:39:03.191552    1643 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:39:03.191563    1643 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:39:03.191572    1643 main.go:141] libmachine: (addons-190000) DBG | Closing plugin on server side
	I0917 01:39:03.192424    1643 addons.go:475] Verifying addon gcp-auth=true in "addons-190000"
	I0917 01:39:03.216119    1643 out.go:177] * Verifying gcp-auth addon...
	I0917 01:39:03.289231    1643 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 01:39:03.292318    1643 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 01:39:03.436353    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:03.551371    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:03.587084    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:03.935505    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:04.051152    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:04.087234    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:04.435708    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:04.551578    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:04.587134    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:04.935955    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:05.051900    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:05.086836    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:05.435742    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:05.551245    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:05.587182    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:05.935209    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:06.050288    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:06.086992    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:06.436379    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:06.551697    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:06.651669    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:06.935096    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:07.052214    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:07.088107    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:07.435466    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:07.552377    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:07.587311    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:07.936538    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:08.050287    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:08.087086    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:08.435881    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:08.550683    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:08.587478    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:08.936267    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:09.051142    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:09.086918    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:09.435437    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:09.551619    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:09.587638    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:09.935339    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:10.052049    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:10.087874    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:10.435247    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:10.601534    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:10.604007    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:10.936715    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:11.051359    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:11.088108    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:11.436350    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:11.551550    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:11.587240    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:11.935939    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:12.050874    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:12.087842    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:12.437489    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:12.551345    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:12.587398    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:12.935324    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:13.050669    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:13.087272    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:13.436559    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:13.551358    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:13.589565    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:13.935945    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:14.099169    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:14.099417    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:14.435954    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:14.551899    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:14.588400    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:14.936132    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:15.051850    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:15.087549    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:15.436098    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:15.552165    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:15.587203    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:15.935362    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:16.051331    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:16.086944    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:16.437317    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:16.551599    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:16.587909    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:16.935548    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:17.051999    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:17.087773    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:17.435585    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:17.551596    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:17.588223    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:17.957217    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:18.051663    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:18.151715    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:18.437498    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:18.553446    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:18.589067    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:18.938704    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:19.051056    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:19.086964    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:19.436837    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:19.551477    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:19.591626    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:19.941907    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:20.051319    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:20.088751    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:20.435439    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:20.552349    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:20.588967    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:20.935589    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:21.051716    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:21.088248    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:21.435618    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:21.551632    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:21.588263    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:21.935570    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:22.051107    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:22.086994    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:22.436523    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:22.551343    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:22.587732    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:22.935593    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:23.051138    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:23.087248    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:23.436134    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:23.551062    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:23.587832    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:23.935721    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:24.051480    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:24.087951    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:24.435688    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:24.551540    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:24.587673    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:24.936639    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:25.050845    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:25.088952    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:25.435827    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:25.551364    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:25.588508    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:25.936682    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:26.053144    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:26.089247    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:39:26.436914    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:26.553667    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:26.588139    1643 kapi.go:107] duration metric: took 31.003188353s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 01:39:26.935685    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:27.051621    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:27.435493    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:27.552384    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:27.935713    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:28.051403    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:28.435913    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:28.551045    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:28.937297    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:29.053549    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:29.439406    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:29.555480    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:29.935537    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:30.051049    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:30.436968    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:30.551146    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:30.937025    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:31.051466    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:31.436902    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:31.551715    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:31.936053    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:32.051386    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:32.436256    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:32.551268    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:32.936800    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:33.050649    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:33.437245    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:33.551943    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:33.935848    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:34.051542    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:34.437387    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:34.552375    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:34.935635    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:35.051281    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:35.436502    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:35.552000    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:35.936328    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:36.052310    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:36.436590    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:36.551919    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:36.935530    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:37.051417    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:37.436016    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:37.550916    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:37.936793    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:38.052220    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:38.436363    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:38.551158    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:38.936592    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:39.051047    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:39.436235    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:39.551514    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:39.935783    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:40.051696    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:40.437975    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:40.551717    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:40.936019    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:41.051410    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:41.436054    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:41.552409    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:41.940145    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:42.053636    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:42.437297    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:42.551748    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:42.936331    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:43.051913    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:43.437382    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:43.553586    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:43.936052    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:44.051205    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:44.437285    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:44.551584    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:44.936587    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:45.050872    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:45.436970    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:45.550875    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:45.935456    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:46.051559    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:46.435845    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:46.552232    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:46.937672    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:47.052198    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:47.435920    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:47.551020    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:47.936335    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:48.051973    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:48.436087    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:48.551425    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:48.936190    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:49.052080    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:49.436158    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:49.551736    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:49.938048    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:50.053404    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:50.437470    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:50.551640    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:50.936036    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:51.051762    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:51.436787    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:51.550691    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:51.935988    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:52.050799    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:52.436220    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:52.551939    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:52.935704    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:53.120931    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:53.436341    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:53.551796    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:53.937727    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:54.052813    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:54.436274    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:54.551277    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:54.936070    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:55.052348    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:55.439371    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:55.551329    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:55.935967    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:56.051390    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:56.437496    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:56.553130    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:56.936523    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:57.051649    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:57.435669    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:57.551118    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:57.935766    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:58.051347    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:58.439061    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:58.551495    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:58.935828    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:59.051603    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:59.436955    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:59.550911    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:39:59.936014    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:00.051392    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:00.435539    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:00.551036    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:00.936119    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:01.053285    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:01.439560    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:01.552553    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:01.937127    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:02.051776    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:02.436346    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:02.551376    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:02.936964    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:03.052459    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:03.435751    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:03.553678    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:03.939377    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:04.054771    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:04.437621    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:04.554078    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:04.935854    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:05.051111    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:05.436064    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:05.551344    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:05.936075    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:06.051248    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:06.446641    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:06.551779    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:06.937902    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:07.051934    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:07.441135    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:07.551668    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:07.936256    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:08.052090    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:08.437386    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:08.551252    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:08.936016    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:09.051232    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:09.435727    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:09.550881    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:09.937573    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:10.051081    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:10.437075    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:10.551364    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:10.936696    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:11.052423    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:11.436536    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:11.551818    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:11.936032    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:12.051229    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:12.437334    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:12.551233    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:12.936323    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:13.051826    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:13.435975    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:13.551505    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:13.936999    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:14.052818    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:14.437292    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:14.550918    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:14.936203    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:15.051069    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:15.435951    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:15.551034    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:15.936548    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:16.052329    1643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:16.437436    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:16.553511    1643 kapi.go:107] duration metric: took 1m15.504927926s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 01:40:16.936714    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:17.436026    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:17.937924    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:18.436708    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:18.936131    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:19.436446    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:19.936443    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:20.437151    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:20.938962    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:21.437097    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:21.936024    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:22.438841    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:22.937342    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:23.436407    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:23.937304    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:24.436763    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:24.935793    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:25.436728    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:25.936766    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:26.292617    1643 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 01:40:26.292627    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:26.438641    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:26.793127    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:26.936753    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:27.292269    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:27.436311    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:27.794030    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:27.937851    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:28.292900    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:28.437271    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:28.793667    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:28.939470    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:29.293738    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:29.438133    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:40:29.794139    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:29.938053    1643 kapi.go:107] duration metric: took 1m28.004699424s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 01:40:30.293615    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:30.794410    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:31.294980    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:31.795431    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:32.295216    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:32.795098    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:33.294489    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:33.794722    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:34.293552    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:34.793406    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:35.296079    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:35.795345    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:36.294105    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:36.794316    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:37.293778    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:37.793962    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:38.294058    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:38.795487    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:39.294087    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:39.793313    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:40.293394    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:40.794227    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:41.295049    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:41.793778    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:42.294251    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:42.794989    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:43.294716    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:43.795007    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:44.294172    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:44.793843    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:45.294074    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:45.793744    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:46.294541    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:46.793601    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:47.293504    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:47.793968    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:48.295151    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:48.794824    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:49.294185    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:49.794445    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:50.292720    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:50.794136    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:51.294810    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:51.794875    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:52.293841    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:52.794660    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:53.294269    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:53.794130    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:54.294979    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:54.795043    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:55.295861    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:55.796215    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:56.293927    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:56.794940    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:57.294362    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:57.794281    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:58.295776    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:58.794411    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:59.295000    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:59.793714    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:00.294809    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:00.795150    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:01.295207    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:01.794454    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:02.295777    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:02.794815    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:03.294531    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:03.794361    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:04.294003    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:04.792901    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:05.294649    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:05.794674    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:06.294797    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:06.794881    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:07.295717    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:07.796048    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:08.294037    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:08.793460    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:09.294782    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:09.793689    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:10.293544    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:10.793828    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:11.295054    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:11.793156    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:12.294649    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:12.794752    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:13.294841    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:13.794746    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:14.293782    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:14.794434    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:15.294900    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:15.794288    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:16.295684    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:16.794425    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:17.293900    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:17.795008    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:18.294459    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:18.794883    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:19.295577    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:19.794361    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:20.294647    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:20.795571    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:21.295129    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:21.794740    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:22.294450    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:22.794486    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:23.295093    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:23.793897    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:24.294157    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:24.793896    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:25.293804    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:25.794905    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:26.294698    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:26.794466    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:27.294000    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:27.794356    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:28.294471    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:28.796503    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:29.294581    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:29.795946    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:30.296839    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:30.796115    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:31.294500    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:31.793848    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:32.294652    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:32.793940    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:33.293792    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:33.793142    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:34.295268    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:34.794409    1643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:35.295479    1643 kapi.go:107] duration metric: took 2m32.0044879s to wait for kubernetes.io/minikube-addons=gcp-auth ...
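The hundreds of "waiting for pod ... Pending" lines above, and the "duration metric" lines that close each run, come from a label-selector poll loop in minikube's kapi.go that re-lists pods on a short interval until the selector converges. The following is a minimal client-go sketch of that shape, not minikube's actual code; the 500ms interval, 6-minute timeout, and namespace are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods re-lists pods matching selector until every one is Running,
// printing the current phase on each tick and a duration metric at the end,
// mirroring the "waiting for pod" / "duration metric" lines in the log.
func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or no pods yet: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute)
}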
	I0917 01:41:35.315252    1643 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-190000 cluster.
	I0917 01:41:35.336155    1643 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 01:41:35.358292    1643 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 01:41:35.379450    1643 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, helm-tiller, default-storageclass, volcano, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0917 01:41:35.400179    1643 addons.go:510] duration metric: took 2m44.789171338s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner helm-tiller default-storageclass volcano inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0917 01:41:35.400223    1643 start.go:246] waiting for cluster config update ...
	I0917 01:41:35.400255    1643 start.go:255] writing updated cluster config ...
	I0917 01:41:35.423550    1643 ssh_runner.go:195] Run: rm -f paused
	I0917 01:41:35.473151    1643 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0917 01:41:35.495478    1643 out.go:201] 
	W0917 01:41:35.516182    1643 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0917 01:41:35.537273    1643 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0917 01:41:35.632290    1643 out.go:177] * Done! kubectl is now configured to use "addons-190000" cluster and "default" namespace by default
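The gcp-auth note earlier in this log says a pod can opt out of credential mounting via a label with the `gcp-auth-skip-secret` key. A hedged client-go snippet creating such a pod follows; the pod name, image, and namespace are made up for illustration, only the label key comes from the log, and cs is the clientset from the sketch above.

// Create a pod labeled gcp-auth-skip-secret so the gcp-auth webhook
// leaves GCP credentials out of it. Snippet, not a full program.
pod := &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		Name:   "no-gcp-creds", // hypothetical name
		Labels: map[string]string{"gcp-auth-skip-secret": "true"},
	},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{
			{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
		},
	},
}
created, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
if err != nil {
	panic(err)
}
fmt.Println("created", created.Name)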
	
	
	==> Docker <==
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.039118897Z" level=warning msg="cleaning up after shim disconnected" id=8c9e7ef0242445a8a7bf6f50f5d3532db3ef6f2b70cafa73c536dd1f8427ba49 namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1275]: time="2024-09-17T08:51:31.039152575Z" level=info msg="ignoring event" container=8c9e7ef0242445a8a7bf6f50f5d3532db3ef6f2b70cafa73c536dd1f8427ba49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.039164851Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.167333114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.167432740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.167682836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.167876215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 08:51:31 addons-190000 cri-dockerd[1175]: time="2024-09-17T08:51:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/71d07134962bfdd7f0eceb03d49233f6f42b69603daecc997f0c32cf73464f2b/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 08:51:31 addons-190000 dockerd[1275]: time="2024-09-17T08:51:31.432133233Z" level=info msg="ignoring event" container=ea9023c6171c183716a67ae945fb7620ccdf2a62172a6e7ae25b983ef91f15df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.432779139Z" level=info msg="shim disconnected" id=ea9023c6171c183716a67ae945fb7620ccdf2a62172a6e7ae25b983ef91f15df namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.432912928Z" level=warning msg="cleaning up after shim disconnected" id=ea9023c6171c183716a67ae945fb7620ccdf2a62172a6e7ae25b983ef91f15df namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.432954185Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1275]: time="2024-09-17T08:51:31.474123881Z" level=info msg="ignoring event" container=c03f5adef44327cc95356dedcf6ab06cd4ae0f5b0614fd76026db9bf8e8c4e3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.475273693Z" level=info msg="shim disconnected" id=c03f5adef44327cc95356dedcf6ab06cd4ae0f5b0614fd76026db9bf8e8c4e3b namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.476971488Z" level=warning msg="cleaning up after shim disconnected" id=c03f5adef44327cc95356dedcf6ab06cd4ae0f5b0614fd76026db9bf8e8c4e3b namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.477153058Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1275]: time="2024-09-17T08:51:31.593837349Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 17 08:51:31 addons-190000 dockerd[1275]: time="2024-09-17T08:51:31.664420811Z" level=info msg="ignoring event" container=54c622e58c9adbebb1cf096e97f9f79b4bf7f7dbfb8c2922b50ad2414bf2d7fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.664893339Z" level=info msg="shim disconnected" id=54c622e58c9adbebb1cf096e97f9f79b4bf7f7dbfb8c2922b50ad2414bf2d7fd namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.664929876Z" level=warning msg="cleaning up after shim disconnected" id=54c622e58c9adbebb1cf096e97f9f79b4bf7f7dbfb8c2922b50ad2414bf2d7fd namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.664936744Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.746067328Z" level=info msg="shim disconnected" id=efd3cc3353161197601c7d9948f57422ed7462f44c6f5b59101372948ac7e5c5 namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.746221610Z" level=warning msg="cleaning up after shim disconnected" id=efd3cc3353161197601c7d9948f57422ed7462f44c6f5b59101372948ac7e5c5 namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1281]: time="2024-09-17T08:51:31.746231934Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 08:51:31 addons-190000 dockerd[1275]: time="2024-09-17T08:51:31.746592809Z" level=info msg="ignoring event" container=efd3cc3353161197601c7d9948f57422ed7462f44c6f5b59101372948ac7e5c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6a96fd9fbc3e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            6 seconds ago       Exited              gadget                    7                   ce44229bfd6b3       gadget-smz2v
	f27b49aa3d8c2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   b90e153d731fb       gcp-auth-89d5ffd79-d5d65
	f5bf26f230296       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   58d1b3133254b       ingress-nginx-controller-bc57996ff-nphh7
	5f39b883fa2d4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   c8f60f2aef423       ingress-nginx-admission-patch-mjbp9
	183897c356a8e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   efefff2646f06       ingress-nginx-admission-create-c4z6r
	26646456515e3       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       11 minutes ago      Running             local-path-provisioner    0                   37fedd895eed4       local-path-provisioner-86d989889c-whb4v
	dffa6e61659f7       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        11 minutes ago      Running             metrics-server            0                   96b5450eda7ef       metrics-server-84c5f94fbc-zzkrt
	63a349c700776       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                  12 minutes ago      Running             tiller                    0                   3b49c4ab9a6d8       tiller-deploy-b48cc5f79-x9frd
	def45a973c93c       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator    0                   29ad4dc45ed4e       cloud-spanner-emulator-769b77f747-wwtnh
	5b1fd30f69cf5       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   22b1e3a4e2c26       kube-ingress-dns-minikube
	b13c849df7e93       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   b951fd8a90689       storage-provisioner
	eccc72cd94514       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   97134f2368eab       coredns-7c65d6cfc9-f8xkv
	e40699661b668       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   13e4405902922       kube-proxy-q9nxs
	f2474170a9166       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   75a746c571f4f       kube-controller-manager-addons-190000
	da8d804123469       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   89944674dd44c       etcd-addons-190000
	c81c5bfda2548       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   9c5600d43c686       kube-apiserver-addons-190000
	5ca5b1740fe99       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   fd78959b452a1       kube-scheduler-addons-190000
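In the table above, the gadget container has already exited on attempt 7, the shape of a restart loop. A small client-go sweep can surface such containers across all namespaces; this is a sketch reusing cs from the earlier example, and the threshold of 3 restarts is an arbitrary choice.

// Flag containers whose RestartCount suggests a crash loop,
// like gadget-smz2v above. Snippet, not a full program.
pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
if err != nil {
	panic(err)
}
for _, p := range pods.Items {
	for _, st := range p.Status.ContainerStatuses {
		if st.RestartCount > 3 {
			fmt.Printf("%s/%s container %s restarted %d times\n",
				p.Namespace, p.Name, st.Name, st.RestartCount)
		}
	}
}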
	
	
	==> controller_ingress [f5bf26f23029] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0917 08:40:16.169310       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0917 08:40:16.169510       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0917 08:40:16.173532       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/amd64"
	I0917 08:40:16.812858       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0917 08:40:16.824236       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0917 08:40:16.831529       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0917 08:40:16.841497       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9a1ac9b6-e936-45f0-9237-f218479bceff", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0917 08:40:16.845801       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"0b146249-e322-44bb-b2e9-87a6cd8c8c14", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0917 08:40:16.845846       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"67f92952-31a5-46ec-a14d-4b9ebfec3499", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0917 08:40:18.032738       7 nginx.go:317] "Starting NGINX process"
	I0917 08:40:18.033132       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0917 08:40:18.033512       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0917 08:40:18.033782       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0917 08:40:18.043734       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0917 08:40:18.043811       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-nphh7"
	I0917 08:40:18.076045       7 controller.go:213] "Backend successfully reloaded"
	I0917 08:40:18.076670       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0917 08:40:18.077886       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-nphh7", UID:"fe3c66a3-0ca5-47f5-91f2-4ab8a40d3530", APIVersion:"v1", ResourceVersion:"1232", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0917 08:40:18.093797       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-nphh7" node="addons-190000"
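The controller log above shows the standard client-go leader-election handshake: "attempting to acquire leader lease" followed by "successfully acquired lease". Below is a minimal sketch of that mechanism using a coordination Lease lock; the identity, namespace, and timings are illustrative assumptions, not ingress-nginx's actual configuration.

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runElection blocks while competing for the named Lease; callbacks fire
// on the same transitions the controller logs above ("New leader elected").
func runElection(ctx context.Context, cs *kubernetes.Clientset, id string) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "ingress-nginx-leader", Namespace: "ingress-nginx"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second, // assumed timings
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { fmt.Println("acquired lease, starting sync") },
			OnStoppedLeading: func() { fmt.Println("lost lease") },
			OnNewLeader:      func(leader string) { fmt.Println("new leader elected:", leader) },
		},
	})
}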
	
	
	==> coredns [eccc72cd9451] <==
	[INFO] 127.0.0.1:33628 - 5391 "HINFO IN 4417194402960020393.56158118085806006. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.019025245s
	[INFO] 10.244.0.7:34662 - 48782 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171061s
	[INFO] 10.244.0.7:34662 - 28041 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000254299s
	[INFO] 10.244.0.7:52776 - 41472 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126365s
	[INFO] 10.244.0.7:52776 - 41998 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000132776s
	[INFO] 10.244.0.7:42005 - 3799 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073866s
	[INFO] 10.244.0.7:42005 - 31957 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058701s
	[INFO] 10.244.0.7:57297 - 35378 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096841s
	[INFO] 10.244.0.7:57297 - 58419 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000384256s
	[INFO] 10.244.0.7:42625 - 24184 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000096402s
	[INFO] 10.244.0.7:42625 - 46970 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00009789s
	[INFO] 10.244.0.7:38342 - 35860 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000040006s
	[INFO] 10.244.0.7:38342 - 46614 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035012s
	[INFO] 10.244.0.7:55497 - 13386 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003643s
	[INFO] 10.244.0.7:55497 - 53576 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091222s
	[INFO] 10.244.0.7:60569 - 25933 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000087943s
	[INFO] 10.244.0.7:60569 - 23630 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000078513s
	[INFO] 10.244.0.26:50786 - 55022 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.003443366s
	[INFO] 10.244.0.26:57373 - 56188 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.003456166s
	[INFO] 10.244.0.26:45473 - 32414 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079407s
	[INFO] 10.244.0.26:50415 - 64858 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000064495s
	[INFO] 10.244.0.26:36877 - 46376 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000054661s
	[INFO] 10.244.0.26:60894 - 41870 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085705s
	[INFO] 10.244.0.26:45545 - 25828 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00153232s
	[INFO] 10.244.0.26:51526 - 53768 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002098744s
	
	
	==> describe nodes <==
	Name:               addons-190000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-190000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=addons-190000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T01_38_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-190000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 08:38:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-190000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 08:51:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 08:47:25 +0000   Tue, 17 Sep 2024 08:38:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 08:47:25 +0000   Tue, 17 Sep 2024 08:38:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 08:47:25 +0000   Tue, 17 Sep 2024 08:38:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 08:47:25 +0000   Tue, 17 Sep 2024 08:38:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.2
	  Hostname:    addons-190000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912944Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912944Ki
	  pods:               110
	System Info:
	  Machine ID:                 71ef61609e5d4f20bd722a48dd0c56fd
	  System UUID:                d98b4314-0000-0000-adb0-d8da382f990b
	  Boot ID:                    710bec66-0fa2-4a5c-95f6-0efed064676d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-769b77f747-wwtnh                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-smz2v                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-d5d65                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-nphh7                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-f8xkv                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-190000                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-190000                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-190000                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-q9nxs                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-190000                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-zzkrt                               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 tiller-deploy-b48cc5f79-x9frd                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-create-pvc-504e0720-4f81-475b-a09a-542324f00b19    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  local-path-storage          local-path-provisioner-86d989889c-whb4v                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-190000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-190000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-190000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-190000 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-190000 event: Registered Node addons-190000 in Controller
	
	
	==> dmesg <==
	[  +8.032661] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.372422] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.402865] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.419194] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.055346] kauditd_printk_skb: 21 callbacks suppressed
	[Sep17 08:40] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.715984] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.919590] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.921382] kauditd_printk_skb: 40 callbacks suppressed
	[Sep17 08:41] kauditd_printk_skb: 28 callbacks suppressed
	[ +23.534338] kauditd_printk_skb: 46 callbacks suppressed
	[ +23.220508] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.315454] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 08:42] kauditd_printk_skb: 33 callbacks suppressed
	[ +10.550548] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.934392] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 08:46] kauditd_printk_skb: 28 callbacks suppressed
	[Sep17 08:50] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.116752] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.815974] kauditd_printk_skb: 7 callbacks suppressed
	[Sep17 08:51] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.934571] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.594022] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.181243] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.003125] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [da8d80412346] <==
	{"level":"info","ts":"2024-09-17T08:38:50.965188Z","caller":"traceutil/trace.go:171","msg":"trace[1027856931] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"207.780493ms","start":"2024-09-17T08:38:50.757398Z","end":"2024-09-17T08:38:50.965179Z","steps":["trace[1027856931] 'process raft request'  (duration: 207.549032ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:38:50.965297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.979111ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-190000\" ","response":"range_response_count:1 size:4349"}
	{"level":"info","ts":"2024-09-17T08:38:50.965311Z","caller":"traceutil/trace.go:171","msg":"trace[557552489] range","detail":"{range_begin:/registry/minions/addons-190000; range_end:; response_count:1; response_revision:346; }","duration":"180.00004ms","start":"2024-09-17T08:38:50.785307Z","end":"2024-09-17T08:38:50.965307Z","steps":["trace[557552489] 'agreement among raft nodes before linearized reading'  (duration: 179.964079ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:38:50.965380Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.481215ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4034"}
	{"level":"info","ts":"2024-09-17T08:38:50.965391Z","caller":"traceutil/trace.go:171","msg":"trace[1794215061] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:346; }","duration":"179.493619ms","start":"2024-09-17T08:38:50.785894Z","end":"2024-09-17T08:38:50.965388Z","steps":["trace[1794215061] 'agreement among raft nodes before linearized reading'  (duration: 179.470809ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.230685Z","caller":"traceutil/trace.go:171","msg":"trace[1005081551] linearizableReadLoop","detail":"{readStateIndex:877; appliedIndex:876; }","duration":"120.987871ms","start":"2024-09-17T08:39:01.109647Z","end":"2024-09-17T08:39:01.230634Z","steps":["trace[1005081551] 'read index received'  (duration: 8.982133ms)","trace[1005081551] 'applied index is now lower than readState.Index'  (duration: 112.004824ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T08:39:01.230939Z","caller":"traceutil/trace.go:171","msg":"trace[847937240] transaction","detail":"{read_only:false; response_revision:861; number_of_response:1; }","duration":"137.749889ms","start":"2024-09-17T08:39:01.093183Z","end":"2024-09-17T08:39:01.230933Z","steps":["trace[847937240] 'process raft request'  (duration: 99.851728ms)","trace[847937240] 'compare'  (duration: 37.17877ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T08:39:01.230596Z","caller":"traceutil/trace.go:171","msg":"trace[1560474312] transaction","detail":"{read_only:false; response_revision:862; number_of_response:1; }","duration":"102.831795ms","start":"2024-09-17T08:39:01.127756Z","end":"2024-09-17T08:39:01.230587Z","steps":["trace[1560474312] 'process raft request'  (duration: 102.52229ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:01.231369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.71619ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.232359Z","caller":"traceutil/trace.go:171","msg":"trace[1370722757] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:862; }","duration":"122.710205ms","start":"2024-09-17T08:39:01.109643Z","end":"2024-09-17T08:39:01.232353Z","steps":["trace[1370722757] 'agreement among raft nodes before linearized reading'  (duration: 121.697689ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:02.153893Z","caller":"traceutil/trace.go:171","msg":"trace[147820133] linearizableReadLoop","detail":"{readStateIndex:923; appliedIndex:921; }","duration":"218.418041ms","start":"2024-09-17T08:39:01.935467Z","end":"2024-09-17T08:39:02.153885Z","steps":["trace[147820133] 'read index received'  (duration: 8.631432ms)","trace[147820133] 'applied index is now lower than readState.Index'  (duration: 209.785447ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T08:39:02.154023Z","caller":"traceutil/trace.go:171","msg":"trace[1713566821] transaction","detail":"{read_only:false; response_revision:909; number_of_response:1; }","duration":"198.622617ms","start":"2024-09-17T08:39:01.955356Z","end":"2024-09-17T08:39:02.153978Z","steps":["trace[1713566821] 'process raft request'  (duration: 198.393454ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:02.154137Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.66657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpath-resizer-0\" ","response":"range_response_count:1 size:2632"}
	{"level":"info","ts":"2024-09-17T08:39:02.154153Z","caller":"traceutil/trace.go:171","msg":"trace[849532434] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpath-resizer-0; range_end:; response_count:1; response_revision:910; }","duration":"192.687826ms","start":"2024-09-17T08:39:01.961461Z","end":"2024-09-17T08:39:02.154149Z","steps":["trace[849532434] 'agreement among raft nodes before linearized reading'  (duration: 192.630871ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:02.154243Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.773732ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:02.154296Z","caller":"traceutil/trace.go:171","msg":"trace[1541457123] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:910; }","duration":"218.826605ms","start":"2024-09-17T08:39:01.935465Z","end":"2024-09-17T08:39:02.154291Z","steps":["trace[1541457123] 'agreement among raft nodes before linearized reading'  (duration: 218.767431ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:02.153898Z","caller":"traceutil/trace.go:171","msg":"trace[650938521] transaction","detail":"{read_only:false; response_revision:908; number_of_response:1; }","duration":"215.94351ms","start":"2024-09-17T08:39:01.937920Z","end":"2024-09-17T08:39:02.153863Z","steps":["trace[650938521] 'process raft request'  (duration: 215.792341ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:02.154415Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.890822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:02.154428Z","caller":"traceutil/trace.go:171","msg":"trace[2060566434] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:910; }","duration":"192.904896ms","start":"2024-09-17T08:39:01.961519Z","end":"2024-09-17T08:39:02.154424Z","steps":["trace[2060566434] 'agreement among raft nodes before linearized reading'  (duration: 192.88502ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:02.153918Z","caller":"traceutil/trace.go:171","msg":"trace[1971169470] transaction","detail":"{read_only:false; response_revision:907; number_of_response:1; }","duration":"218.484631ms","start":"2024-09-17T08:39:01.935429Z","end":"2024-09-17T08:39:02.153914Z","steps":["trace[1971169470] 'process raft request'  (duration: 181.796917ms)","trace[1971169470] 'compare'  (duration: 36.373785ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T08:39:25.884077Z","caller":"traceutil/trace.go:171","msg":"trace[356912023] transaction","detail":"{read_only:false; response_revision:1021; number_of_response:1; }","duration":"124.684155ms","start":"2024-09-17T08:39:25.759383Z","end":"2024-09-17T08:39:25.884068Z","steps":["trace[356912023] 'process raft request'  (duration: 124.550899ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:48:41.826728Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1866}
	{"level":"info","ts":"2024-09-17T08:48:41.878447Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1866,"took":"51.063987ms","hash":2289264749,"current-db-size-bytes":9273344,"current-db-size":"9.3 MB","current-db-size-in-use-bytes":5050368,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-09-17T08:48:41.878498Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2289264749,"revision":1866,"compact-revision":-1}
	{"level":"info","ts":"2024-09-17T08:51:32.982127Z","caller":"traceutil/trace.go:171","msg":"trace[1269585919] transaction","detail":"{read_only:false; response_revision:2802; number_of_response:1; }","duration":"109.112365ms","start":"2024-09-17T08:51:32.873002Z","end":"2024-09-17T08:51:32.982114Z","steps":["trace[1269585919] 'process raft request'  (duration: 52.478023ms)","trace[1269585919] 'compare'  (duration: 56.493014ms)"],"step_count":2}
	
	
	==> gcp-auth [f27b49aa3d8c] <==
	2024/09/17 08:41:34 GCP Auth Webhook started!
	2024/09/17 08:41:51 Ready to marshal response ...
	2024/09/17 08:41:51 Ready to write response ...
	2024/09/17 08:41:52 Ready to marshal response ...
	2024/09/17 08:41:52 Ready to write response ...
	2024/09/17 08:42:17 Ready to marshal response ...
	2024/09/17 08:42:17 Ready to write response ...
	2024/09/17 08:42:17 Ready to marshal response ...
	2024/09/17 08:42:17 Ready to write response ...
	2024/09/17 08:42:17 Ready to marshal response ...
	2024/09/17 08:42:17 Ready to write response ...
	2024/09/17 08:50:31 Ready to marshal response ...
	2024/09/17 08:50:31 Ready to write response ...
	2024/09/17 08:50:40 Ready to marshal response ...
	2024/09/17 08:50:40 Ready to write response ...
	2024/09/17 08:50:57 Ready to marshal response ...
	2024/09/17 08:50:57 Ready to write response ...
	2024/09/17 08:51:30 Ready to marshal response ...
	2024/09/17 08:51:30 Ready to write response ...
	2024/09/17 08:51:30 Ready to marshal response ...
	2024/09/17 08:51:30 Ready to write response ...
	
	
	==> kernel <==
	 08:51:33 up 13 min,  0 users,  load average: 0.65, 0.76, 0.60
	Linux addons-190000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c81c5bfda254] <==
	I0917 08:42:08.090208       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0917 08:42:08.114933       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0917 08:42:08.217197       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0917 08:42:08.471915       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0917 08:42:08.495240       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0917 08:42:08.518917       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0917 08:42:08.949161       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0917 08:42:09.217870       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0917 08:42:09.413451       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0917 08:42:09.413454       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0917 08:42:09.438606       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0917 08:42:09.520028       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0917 08:42:09.726304       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0917 08:50:47.153170       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0917 08:51:13.314698       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:51:13.315198       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 08:51:13.331642       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:51:13.331689       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 08:51:13.336873       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:51:13.337383       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 08:51:13.354651       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:51:13.354697       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 08:51:14.332696       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 08:51:14.494220       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0917 08:51:14.494465       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [f2474170a916] <==
	I0917 08:51:20.024245       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="4.308µs"
	I0917 08:51:20.062376       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0917 08:51:20.062532       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 08:51:20.291193       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0917 08:51:20.291235       1 shared_informer.go:320] Caches are synced for garbage collector
	W0917 08:51:22.061549       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:22.061725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:22.689418       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:22.689499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:23.001870       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:23.001900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:23.064462       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:23.064872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:26.928232       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:26.928276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:27.556214       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:27.556276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:51:30.097142       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0917 08:51:31.388822       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.514µs"
	W0917 08:51:31.414920       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:31.415118       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:32.107205       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:32.107250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:33.612410       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:33.612455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [e40699661b66] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 08:38:49.632107       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 08:38:49.640834       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.2"]
	E0917 08:38:49.640976       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 08:38:49.693355       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 08:38:49.693376       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 08:38:49.693391       1 server_linux.go:169] "Using iptables Proxier"
	I0917 08:38:49.695483       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 08:38:49.695738       1 server.go:483] "Version info" version="v1.31.1"
	I0917 08:38:49.695747       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 08:38:49.696871       1 config.go:199] "Starting service config controller"
	I0917 08:38:49.696877       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 08:38:49.696887       1 config.go:105] "Starting endpoint slice config controller"
	I0917 08:38:49.696889       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 08:38:49.698306       1 config.go:328] "Starting node config controller"
	I0917 08:38:49.700168       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 08:38:49.798037       1 shared_informer.go:320] Caches are synced for service config
	I0917 08:38:49.798270       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 08:38:49.800839       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5ca5b1740fe9] <==
	W0917 08:38:41.726049       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 08:38:41.727303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:41.726164       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:41.727479       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:42.611414       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 08:38:42.611593       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 08:38:42.615144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 08:38:42.615185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:42.638468       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 08:38:42.638513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:42.684105       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 08:38:42.684150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:42.719731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:42.719941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:42.807659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 08:38:42.807835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:42.826041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:42.826086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:42.845991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 08:38:42.846102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:42.851121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 08:38:42.851236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:42.862670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:42.862870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0917 08:38:44.615846       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 08:51:31 addons-190000 kubelet[2050]: I0917 08:51:31.135928    2050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8d3cca5c6e680f6b27b282e16e2b2a45d53f35215e1c7e572bbefe8bc6b1f4a6"} err="failed to get container status \"8d3cca5c6e680f6b27b282e16e2b2a45d53f35215e1c7e572bbefe8bc6b1f4a6\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8d3cca5c6e680f6b27b282e16e2b2a45d53f35215e1c7e572bbefe8bc6b1f4a6"
	Sep 17 08:51:31 addons-190000 kubelet[2050]: I0917 08:51:31.225498    2050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9skzx\" (UniqueName: \"kubernetes.io/projected/5379f7d0-ce2e-4fe2-82d9-386a10d56ad7-kube-api-access-9skzx\") pod \"5379f7d0-ce2e-4fe2-82d9-386a10d56ad7\" (UID: \"5379f7d0-ce2e-4fe2-82d9-386a10d56ad7\") "
	Sep 17 08:51:31 addons-190000 kubelet[2050]: I0917 08:51:31.225534    2050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5379f7d0-ce2e-4fe2-82d9-386a10d56ad7-gcp-creds\") pod \"5379f7d0-ce2e-4fe2-82d9-386a10d56ad7\" (UID: \"5379f7d0-ce2e-4fe2-82d9-386a10d56ad7\") "
	Sep 17 08:51:31 addons-190000 kubelet[2050]: I0917 08:51:31.225588    2050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5379f7d0-ce2e-4fe2-82d9-386a10d56ad7-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5379f7d0-ce2e-4fe2-82d9-386a10d56ad7" (UID: "5379f7d0-ce2e-4fe2-82d9-386a10d56ad7"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 08:51:31 addons-190000 kubelet[2050]: I0917 08:51:31.227239    2050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5379f7d0-ce2e-4fe2-82d9-386a10d56ad7-kube-api-access-9skzx" (OuterVolumeSpecName: "kube-api-access-9skzx") pod "5379f7d0-ce2e-4fe2-82d9-386a10d56ad7" (UID: "5379f7d0-ce2e-4fe2-82d9-386a10d56ad7"). InnerVolumeSpecName "kube-api-access-9skzx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:51:31 addons-190000 kubelet[2050]: I0917 08:51:31.325737    2050 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5379f7d0-ce2e-4fe2-82d9-386a10d56ad7-gcp-creds\") on node \"addons-190000\" DevicePath \"\""
	Sep 17 08:51:31 addons-190000 kubelet[2050]: I0917 08:51:31.325847    2050 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9skzx\" (UniqueName: \"kubernetes.io/projected/5379f7d0-ce2e-4fe2-82d9-386a10d56ad7-kube-api-access-9skzx\") on node \"addons-190000\" DevicePath \"\""
	Sep 17 08:51:31 addons-190000 kubelet[2050]: I0917 08:51:31.932896    2050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsbwp\" (UniqueName: \"kubernetes.io/projected/2d8d4a63-2d55-49da-9763-4fb31b7dc6c9-kube-api-access-dsbwp\") pod \"2d8d4a63-2d55-49da-9763-4fb31b7dc6c9\" (UID: \"2d8d4a63-2d55-49da-9763-4fb31b7dc6c9\") "
	Sep 17 08:51:31 addons-190000 kubelet[2050]: I0917 08:51:31.932952    2050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbjtg\" (UniqueName: \"kubernetes.io/projected/5ebe9b61-99d8-42d6-9925-57fe4224f525-kube-api-access-jbjtg\") pod \"5ebe9b61-99d8-42d6-9925-57fe4224f525\" (UID: \"5ebe9b61-99d8-42d6-9925-57fe4224f525\") "
	Sep 17 08:51:31 addons-190000 kubelet[2050]: I0917 08:51:31.950970    2050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebe9b61-99d8-42d6-9925-57fe4224f525-kube-api-access-jbjtg" (OuterVolumeSpecName: "kube-api-access-jbjtg") pod "5ebe9b61-99d8-42d6-9925-57fe4224f525" (UID: "5ebe9b61-99d8-42d6-9925-57fe4224f525"). InnerVolumeSpecName "kube-api-access-jbjtg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:51:31 addons-190000 kubelet[2050]: I0917 08:51:31.951309    2050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8d4a63-2d55-49da-9763-4fb31b7dc6c9-kube-api-access-dsbwp" (OuterVolumeSpecName: "kube-api-access-dsbwp") pod "2d8d4a63-2d55-49da-9763-4fb31b7dc6c9" (UID: "2d8d4a63-2d55-49da-9763-4fb31b7dc6c9"). InnerVolumeSpecName "kube-api-access-dsbwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:51:32 addons-190000 kubelet[2050]: I0917 08:51:32.034304    2050 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jbjtg\" (UniqueName: \"kubernetes.io/projected/5ebe9b61-99d8-42d6-9925-57fe4224f525-kube-api-access-jbjtg\") on node \"addons-190000\" DevicePath \"\""
	Sep 17 08:51:32 addons-190000 kubelet[2050]: I0917 08:51:32.034419    2050 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dsbwp\" (UniqueName: \"kubernetes.io/projected/2d8d4a63-2d55-49da-9763-4fb31b7dc6c9-kube-api-access-dsbwp\") on node \"addons-190000\" DevicePath \"\""
	Sep 17 08:51:32 addons-190000 kubelet[2050]: I0917 08:51:32.073854    2050 scope.go:117] "RemoveContainer" containerID="c03f5adef44327cc95356dedcf6ab06cd4ae0f5b0614fd76026db9bf8e8c4e3b"
	Sep 17 08:51:32 addons-190000 kubelet[2050]: I0917 08:51:32.145900    2050 scope.go:117] "RemoveContainer" containerID="c03f5adef44327cc95356dedcf6ab06cd4ae0f5b0614fd76026db9bf8e8c4e3b"
	Sep 17 08:51:32 addons-190000 kubelet[2050]: E0917 08:51:32.146666    2050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c03f5adef44327cc95356dedcf6ab06cd4ae0f5b0614fd76026db9bf8e8c4e3b" containerID="c03f5adef44327cc95356dedcf6ab06cd4ae0f5b0614fd76026db9bf8e8c4e3b"
	Sep 17 08:51:32 addons-190000 kubelet[2050]: I0917 08:51:32.146774    2050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c03f5adef44327cc95356dedcf6ab06cd4ae0f5b0614fd76026db9bf8e8c4e3b"} err="failed to get container status \"c03f5adef44327cc95356dedcf6ab06cd4ae0f5b0614fd76026db9bf8e8c4e3b\": rpc error: code = Unknown desc = Error response from daemon: No such container: c03f5adef44327cc95356dedcf6ab06cd4ae0f5b0614fd76026db9bf8e8c4e3b"
	Sep 17 08:51:32 addons-190000 kubelet[2050]: I0917 08:51:32.146794    2050 scope.go:117] "RemoveContainer" containerID="ea9023c6171c183716a67ae945fb7620ccdf2a62172a6e7ae25b983ef91f15df"
	Sep 17 08:51:32 addons-190000 kubelet[2050]: I0917 08:51:32.249375    2050 scope.go:117] "RemoveContainer" containerID="ea9023c6171c183716a67ae945fb7620ccdf2a62172a6e7ae25b983ef91f15df"
	Sep 17 08:51:32 addons-190000 kubelet[2050]: E0917 08:51:32.250266    2050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ea9023c6171c183716a67ae945fb7620ccdf2a62172a6e7ae25b983ef91f15df" containerID="ea9023c6171c183716a67ae945fb7620ccdf2a62172a6e7ae25b983ef91f15df"
	Sep 17 08:51:32 addons-190000 kubelet[2050]: I0917 08:51:32.250289    2050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ea9023c6171c183716a67ae945fb7620ccdf2a62172a6e7ae25b983ef91f15df"} err="failed to get container status \"ea9023c6171c183716a67ae945fb7620ccdf2a62172a6e7ae25b983ef91f15df\": rpc error: code = Unknown desc = Error response from daemon: No such container: ea9023c6171c183716a67ae945fb7620ccdf2a62172a6e7ae25b983ef91f15df"
	Sep 17 08:51:32 addons-190000 kubelet[2050]: I0917 08:51:32.877728    2050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d8d4a63-2d55-49da-9763-4fb31b7dc6c9" path="/var/lib/kubelet/pods/2d8d4a63-2d55-49da-9763-4fb31b7dc6c9/volumes"
	Sep 17 08:51:32 addons-190000 kubelet[2050]: I0917 08:51:32.878190    2050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5379f7d0-ce2e-4fe2-82d9-386a10d56ad7" path="/var/lib/kubelet/pods/5379f7d0-ce2e-4fe2-82d9-386a10d56ad7/volumes"
	Sep 17 08:51:32 addons-190000 kubelet[2050]: I0917 08:51:32.878407    2050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebe9b61-99d8-42d6-9925-57fe4224f525" path="/var/lib/kubelet/pods/5ebe9b61-99d8-42d6-9925-57fe4224f525/volumes"
	Sep 17 08:51:32 addons-190000 kubelet[2050]: I0917 08:51:32.878767    2050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76589ce8-f1e9-4d47-98e3-18f0b6b25a2d" path="/var/lib/kubelet/pods/76589ce8-f1e9-4d47-98e3-18f0b6b25a2d/volumes"
	
	
	==> storage-provisioner [b13c849df7e9] <==
	I0917 08:38:57.650550       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 08:38:57.793787       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 08:38:57.793813       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 08:38:57.983711       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 08:38:57.983899       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-190000_222ddf13-85eb-4af8-abab-5b340b5abb9c!
	I0917 08:38:57.984895       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"828ff135-4356-40d8-971b-24365828648d", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-190000_222ddf13-85eb-4af8-abab-5b340b5abb9c became leader
	I0917 08:38:58.084444       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-190000_222ddf13-85eb-4af8-abab-5b340b5abb9c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p addons-190000 -n addons-190000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-190000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path ingress-nginx-admission-create-c4z6r ingress-nginx-admission-patch-mjbp9 helper-pod-create-pvc-504e0720-4f81-475b-a09a-542324f00b19
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-190000 describe pod busybox test-local-path ingress-nginx-admission-create-c4z6r ingress-nginx-admission-patch-mjbp9 helper-pod-create-pvc-504e0720-4f81-475b-a09a-542324f00b19
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-190000 describe pod busybox test-local-path ingress-nginx-admission-create-c4z6r ingress-nginx-admission-patch-mjbp9 helper-pod-create-pvc-504e0720-4f81-475b-a09a-542324f00b19: exit status 1 (75.131848ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-190000/192.169.0.2
	Start Time:       Tue, 17 Sep 2024 01:42:17 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mcvgk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mcvgk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-190000
	  Normal   Pulling    7m47s (x4 over 9m16s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m46s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m46s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m18s (x6 over 9m15s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x20 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vbmpr (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-vbmpr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-c4z6r" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mjbp9" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-504e0720-4f81-475b-a09a-542324f00b19" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-190000 describe pod busybox test-local-path ingress-nginx-admission-create-c4z6r ingress-nginx-admission-patch-mjbp9 helper-pod-create-pvc-504e0720-4f81-475b-a09a-542324f00b19: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.05s)
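
The busybox pod events above carry the root cause of this Registry failure: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected by gcr.io with "unauthorized: authentication failed". A quick cross-check is to repeat the exact pull from outside the test harness; a minimal sketch, assuming a Docker daemon is available on the build host and reusing the profile name from this run (addons-190000):

	# Pull the exact tag the pod references; if this also fails with
	# "unauthorized", the problem is registry-side auth, not the addon.
	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

	# The same check from inside the minikube node, via the report's own
	# binary and "ssh with a command" form used elsewhere in this log:
	out/minikube-darwin-amd64 -p addons-190000 ssh "docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"

If the host-side pull fails the same way, the failure is an upstream registry/auth issue rather than a regression in the addon under test.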

TestCertOptions (251.8s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-583000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0917 02:47:58.339663    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:48:26.059258    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:48:59.184228    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-583000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.117873715s)

-- stdout --
	* [cert-options-583000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-583000" primary control-plane node in "cert-options-583000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-583000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for be:d4:52:6f:5d:c0
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-583000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:e4:85:2f:3e:a0
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:e4:85:2f:3e:a0
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-583000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-583000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-583000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (162.149509ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-583000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-583000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-583000 config view
cert_options_test.go:93: Kubeconfig apiserver port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-583000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-583000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (164.496388ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-583000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-583000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-583000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-17 02:50:51.978857 -0700 PDT m=+4398.068025547
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-583000 -n cert-options-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-583000 -n cert-options-583000: exit status 7 (78.961075ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0917 02:50:52.056138    6855 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 02:50:52.056159    6855 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-583000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-583000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-583000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-583000: (5.239547465s)
--- FAIL: TestCertOptions (251.80s)
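
Note: both VM creation attempts failed in the same way: the hyperkit driver generated a MAC address, booted the VM, then timed out waiting for that MAC to appear in the host's DHCP lease database (the driver's polling of /var/db/dhcpd_leases is visible in the TestDockerFlags log further down). As a manual cross-check (illustrative only; the MAC is the one from the second attempt above, and macOS writes each octet without leading zeros), one could search the lease file on the host directly:

	grep -i '1e:e4:85:2f:3e:a0' /var/db/dhcpd_leases

No match would confirm that macOS never issued a lease to the VM, pointing at host networking rather than the guest.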

TestCertExpiration (1722.26s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-234000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0917 02:45:42.215927    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:46:18.928402    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:46:35.851319    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-234000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.561383965s)

-- stdout --
	* [cert-expiration-234000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-234000" primary control-plane node in "cert-expiration-234000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-234000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:68:5e:4:5a:ea
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-234000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for be:40:c0:2c:54:f7
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for be:40:c0:2c:54:f7
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-234000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-234000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0917 02:52:58.372555    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-234000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (21m30.351343447s)

-- stdout --
	* [cert-expiration-234000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-234000" primary control-plane node in "cert-expiration-234000" cluster
	* Updating the running hyperkit "cert-expiration-234000" VM ...
	* Updating the running hyperkit "cert-expiration-234000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-234000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-234000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-234000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-234000" primary control-plane node in "cert-expiration-234000" cluster
	* Updating the running hyperkit "cert-expiration-234000" VM ...
	* Updating the running hyperkit "cert-expiration-234000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-234000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-17 03:14:19.182854 -0700 PDT m=+5805.175871634
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-234000 -n cert-expiration-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-234000 -n cert-expiration-234000: exit status 7 (79.935204ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0917 03:14:19.260734    8369 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 03:14:19.260755    8369 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-234000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-234000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-234000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-234000: (5.267585086s)
--- FAIL: TestCertExpiration (1722.26s)
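
Note: this failure differs from the plain creation failures above: the first start left behind a profile and a running but unreachable VM, so the second start took the "existing profile" path and then could not recover an IP ("IP address is not set"). While the host is in that state, a direct query (illustrative follow-up, not run here) such as

	out/minikube-darwin-amd64 ip -p cert-expiration-234000

would be expected to fail with the same "IP address is not set" error; only the delete at helpers_test.go:178 clears the stale machine.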

TestDockerFlags (252.21s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-802000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0917 02:42:58.340170    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:42:58.347895    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:42:58.359388    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:42:58.382250    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:42:58.424447    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:42:58.507848    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:42:58.671222    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:42:58.993390    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:42:59.636869    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:43:00.918671    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:43:03.480922    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:43:08.604356    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:43:18.847733    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:43:39.330024    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:43:59.182135    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:44:20.293590    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-802000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.447816026s)

-- stdout --
	* [docker-flags-802000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-802000" primary control-plane node in "docker-flags-802000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-802000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0917 02:42:33.310601    6657 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:42:33.310806    6657 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:42:33.310812    6657 out.go:358] Setting ErrFile to fd 2...
	I0917 02:42:33.310816    6657 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:42:33.311009    6657 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:42:33.312444    6657 out.go:352] Setting JSON to false
	I0917 02:42:33.335558    6657 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4323,"bootTime":1726561830,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:42:33.335743    6657 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:42:33.357605    6657 out.go:177] * [docker-flags-802000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:42:33.399831    6657 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:42:33.399861    6657 notify.go:220] Checking for updates...
	I0917 02:42:33.440722    6657 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:42:33.462714    6657 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:42:33.483487    6657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:42:33.503729    6657 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:42:33.524766    6657 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:42:33.546177    6657 config.go:182] Loaded profile config "force-systemd-flag-972000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:42:33.546276    6657 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:42:33.574775    6657 out.go:177] * Using the hyperkit driver based on user configuration
	I0917 02:42:33.615589    6657 start.go:297] selected driver: hyperkit
	I0917 02:42:33.615603    6657 start.go:901] validating driver "hyperkit" against <nil>
	I0917 02:42:33.615614    6657 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:42:33.618679    6657 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:42:33.618808    6657 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:42:33.627288    6657 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:42:33.631264    6657 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:42:33.631284    6657 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:42:33.631325    6657 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:42:33.631550    6657 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0917 02:42:33.631586    6657 cni.go:84] Creating CNI manager for ""
	I0917 02:42:33.631625    6657 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:42:33.631636    6657 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:42:33.631694    6657 start.go:340] cluster config:
	{Name:docker-flags-802000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-802000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:42:33.631783    6657 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:42:33.673710    6657 out.go:177] * Starting "docker-flags-802000" primary control-plane node in "docker-flags-802000" cluster
	I0917 02:42:33.694726    6657 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:42:33.694762    6657 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:42:33.694777    6657 cache.go:56] Caching tarball of preloaded images
	I0917 02:42:33.694902    6657 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:42:33.694912    6657 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:42:33.694984    6657 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/docker-flags-802000/config.json ...
	I0917 02:42:33.695001    6657 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/docker-flags-802000/config.json: {Name:mk77c6c1cd4c52a74d1d7a6da67168a02b5c43d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:42:33.695313    6657 start.go:360] acquireMachinesLock for docker-flags-802000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:43:30.616383    6657 start.go:364] duration metric: took 56.920795838s to acquireMachinesLock for "docker-flags-802000"
	I0917 02:43:30.616418    6657 start.go:93] Provisioning new machine with config: &{Name:docker-flags-802000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-802000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:43:30.616469    6657 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 02:43:30.637816    6657 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:43:30.637954    6657 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:43:30.637990    6657 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:43:30.646374    6657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54109
	I0917 02:43:30.646732    6657 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:43:30.647120    6657 main.go:141] libmachine: Using API Version  1
	I0917 02:43:30.647129    6657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:43:30.647349    6657 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:43:30.647482    6657 main.go:141] libmachine: (docker-flags-802000) Calling .GetMachineName
	I0917 02:43:30.647624    6657 main.go:141] libmachine: (docker-flags-802000) Calling .DriverName
	I0917 02:43:30.647745    6657 start.go:159] libmachine.API.Create for "docker-flags-802000" (driver="hyperkit")
	I0917 02:43:30.647772    6657 client.go:168] LocalClient.Create starting
	I0917 02:43:30.647801    6657 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem
	I0917 02:43:30.647858    6657 main.go:141] libmachine: Decoding PEM data...
	I0917 02:43:30.647874    6657 main.go:141] libmachine: Parsing certificate...
	I0917 02:43:30.647926    6657 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem
	I0917 02:43:30.647964    6657 main.go:141] libmachine: Decoding PEM data...
	I0917 02:43:30.647975    6657 main.go:141] libmachine: Parsing certificate...
	I0917 02:43:30.647987    6657 main.go:141] libmachine: Running pre-create checks...
	I0917 02:43:30.647996    6657 main.go:141] libmachine: (docker-flags-802000) Calling .PreCreateCheck
	I0917 02:43:30.648081    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:30.648239    6657 main.go:141] libmachine: (docker-flags-802000) Calling .GetConfigRaw
	I0917 02:43:30.679894    6657 main.go:141] libmachine: Creating machine...
	I0917 02:43:30.679903    6657 main.go:141] libmachine: (docker-flags-802000) Calling .Create
	I0917 02:43:30.679985    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:30.680102    6657 main.go:141] libmachine: (docker-flags-802000) DBG | I0917 02:43:30.679975    6682 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:43:30.680148    6657 main.go:141] libmachine: (docker-flags-802000) Downloading /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1025/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0917 02:43:30.912091    6657 main.go:141] libmachine: (docker-flags-802000) DBG | I0917 02:43:30.911979    6682 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/id_rsa...
	I0917 02:43:30.973772    6657 main.go:141] libmachine: (docker-flags-802000) DBG | I0917 02:43:30.973707    6682 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/docker-flags-802000.rawdisk...
	I0917 02:43:30.973782    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Writing magic tar header
	I0917 02:43:30.973790    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Writing SSH key tar header
	I0917 02:43:30.974402    6657 main.go:141] libmachine: (docker-flags-802000) DBG | I0917 02:43:30.974339    6682 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000 ...
	I0917 02:43:31.350154    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:31.350175    6657 main.go:141] libmachine: (docker-flags-802000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/hyperkit.pid
	I0917 02:43:31.350189    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Using UUID adced3fd-ab40-4ffc-9039-d6ffb11fa20d
	I0917 02:43:31.375589    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Generated MAC 5e:68:bd:79:a7:32
	I0917 02:43:31.375608    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-802000
	I0917 02:43:31.375650    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"adced3fd-ab40-4ffc-9039-d6ffb11fa20d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d4240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:43:31.375681    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"adced3fd-ab40-4ffc-9039-d6ffb11fa20d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d4240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:43:31.375718    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "adced3fd-ab40-4ffc-9039-d6ffb11fa20d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/docker-flags-802000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-802000"}
	I0917 02:43:31.375748    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U adced3fd-ab40-4ffc-9039-d6ffb11fa20d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/docker-flags-802000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-802000"
	I0917 02:43:31.375773    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:43:31.378701    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 DEBUG: hyperkit: Pid is 6684
	I0917 02:43:31.379738    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 0
	I0917 02:43:31.379755    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:31.379829    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6684
	I0917 02:43:31.380782    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 5e:68:bd:79:a7:32 in /var/db/dhcpd_leases ...
	I0917 02:43:31.380834    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:31.380850    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:31.380873    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:31.380885    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:31.380899    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:31.380925    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:31.380939    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:31.380950    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:31.380979    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:31.381000    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:31.381020    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:31.381029    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:31.381047    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:31.381067    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:31.381081    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:31.381096    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:31.381108    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:31.381123    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:31.381136    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:31.386782    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:43:31.395008    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:43:31.395983    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:43:31.396002    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:43:31.396010    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:43:31.396016    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:43:31.769580    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:43:31.769594    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:43:31.884188    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:43:31.884205    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:43:31.884216    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:43:31.884225    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:43:31.885079    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:43:31.885108    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:43:33.381869    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 1
	I0917 02:43:33.381885    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:33.381930    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6684
	I0917 02:43:33.382732    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 5e:68:bd:79:a7:32 in /var/db/dhcpd_leases ...
	I0917 02:43:33.382781    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:33.382794    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:33.382811    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:33.382835    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:33.382849    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:33.382858    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:33.382867    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:33.382873    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:33.382879    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:33.382886    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:33.382898    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:33.382906    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:33.382913    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:33.382919    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:33.382925    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:33.382933    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:33.382939    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:33.382945    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:33.382955    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:35.384797    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 2
	I0917 02:43:35.384811    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:35.384916    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6684
	I0917 02:43:35.385690    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 5e:68:bd:79:a7:32 in /var/db/dhcpd_leases ...
	I0917 02:43:35.385763    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	[... same 18 dhcp entries as in the previous attempt; no lease for 5e:68:bd:79:a7:32 ...]
	I0917 02:43:37.271728    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:43:37.271863    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:43:37.271874    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:43:37.291576    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:43:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:43:37.386450    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 3
	I0917 02:43:37.386477    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:37.386671    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6684
	I0917 02:43:37.388110    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 5e:68:bd:79:a7:32 in /var/db/dhcpd_leases ...
	I0917 02:43:37.388205    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	[... same 18 dhcp entries as above; no lease for 5e:68:bd:79:a7:32 ...]
	[... Attempts 4 through 15 (02:43:39 - 02:44:01) elided: every ~2s the driver re-reads /var/db/dhcpd_leases, finds the same 18 entries, and still no match for 5e:68:bd:79:a7:32 ...]
	(Attempts 16 through 28, 02:44:03 to 02:44:27, one scan roughly every 2 seconds, repeat the identical search: each pass reads /var/db/dhcpd_leases, finds the same 18 "minikube" entries covering 192.169.0.2 through 192.169.0.19, and finds no entry for 5e:68:bd:79:a7:32.)
	I0917 02:44:29.456113    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 29
	I0917 02:44:29.456128    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:29.456193    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6684
	I0917 02:44:29.457033    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 5e:68:bd:79:a7:32 in /var/db/dhcpd_leases ...
	I0917 02:44:29.457078    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:29.457088    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:29.457101    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:29.457109    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:29.457128    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:29.457137    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:29.457145    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:29.457151    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:29.457158    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:29.457165    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:29.457170    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:29.457182    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:29.457195    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:29.457206    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:29.457224    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:29.457232    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:29.457239    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:29.457245    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:29.457253    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:44:31.459367    6657 client.go:171] duration metric: took 1m0.811307159s to LocalClient.Create
	I0917 02:44:33.460699    6657 start.go:128] duration metric: took 1m2.843926697s to createHost
	I0917 02:44:33.460718    6657 start.go:83] releasing machines lock for "docker-flags-802000", held for 1m2.844038161s
	W0917 02:44:33.460732    6657 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5e:68:bd:79:a7:32
	I0917 02:44:33.461056    6657 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:44:33.461074    6657 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:44:33.469570    6657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54111
	I0917 02:44:33.469925    6657 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:44:33.470238    6657 main.go:141] libmachine: Using API Version  1
	I0917 02:44:33.470248    6657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:44:33.470509    6657 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:44:33.470903    6657 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:44:33.470921    6657 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:44:33.479205    6657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54113
	I0917 02:44:33.479524    6657 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:44:33.479828    6657 main.go:141] libmachine: Using API Version  1
	I0917 02:44:33.479836    6657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:44:33.480085    6657 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:44:33.480215    6657 main.go:141] libmachine: (docker-flags-802000) Calling .GetState
	I0917 02:44:33.480338    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:33.480397    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6684
	I0917 02:44:33.481383    6657 main.go:141] libmachine: (docker-flags-802000) Calling .DriverName
	I0917 02:44:33.544188    6657 out.go:177] * Deleting "docker-flags-802000" in hyperkit ...
	I0917 02:44:33.565447    6657 main.go:141] libmachine: (docker-flags-802000) Calling .Remove
	I0917 02:44:33.565570    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:33.565586    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:33.565667    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6684
	I0917 02:44:33.566621    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:33.566687    6657 main.go:141] libmachine: (docker-flags-802000) DBG | waiting for graceful shutdown
	I0917 02:44:34.568737    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:34.568818    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6684
	I0917 02:44:34.569775    6657 main.go:141] libmachine: (docker-flags-802000) DBG | waiting for graceful shutdown
	I0917 02:44:35.570099    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:35.570218    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6684
	I0917 02:44:35.571839    6657 main.go:141] libmachine: (docker-flags-802000) DBG | waiting for graceful shutdown
	I0917 02:44:36.573205    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:36.573275    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6684
	I0917 02:44:36.573909    6657 main.go:141] libmachine: (docker-flags-802000) DBG | waiting for graceful shutdown
	I0917 02:44:37.575527    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:37.575638    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6684
	I0917 02:44:37.576394    6657 main.go:141] libmachine: (docker-flags-802000) DBG | waiting for graceful shutdown
	I0917 02:44:38.577403    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:38.577480    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6684
	I0917 02:44:38.578488    6657 main.go:141] libmachine: (docker-flags-802000) DBG | sending sigkill
	I0917 02:44:38.578497    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:38.588073    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:44:38 WARN : hyperkit: failed to read stdout: EOF
	I0917 02:44:38.588089    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:44:38 WARN : hyperkit: failed to read stderr: EOF
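
	The delete path above follows a shutdown-with-escalation pattern: the driver polls the hyperkit pid while "waiting for graceful shutdown", then falls back to "sending sigkill". A rough Go sketch of that shape (stopWithEscalation is a hypothetical helper, not the driver's actual code; the pid and grace period are taken from the log for illustration):

	package main

	import (
		"log"
		"os"
		"syscall"
		"time"
	)

	// stopWithEscalation asks pid to exit, waits up to grace for it to
	// disappear, then hard-kills it -- the same escalation visible in the
	// "waiting for graceful shutdown" / "sending sigkill" lines above.
	func stopWithEscalation(pid int, grace time.Duration) error {
		proc, err := os.FindProcess(pid) // always succeeds on Unix
		if err != nil {
			return err
		}
		_ = proc.Signal(syscall.SIGTERM)

		deadline := time.Now().Add(grace)
		for time.Now().Before(deadline) {
			// Signal 0 only probes whether the process still exists.
			if err := proc.Signal(syscall.Signal(0)); err != nil {
				return nil // process exited gracefully
			}
			time.Sleep(time.Second) // one "waiting for graceful shutdown" tick
		}
		return proc.Signal(syscall.SIGKILL) // "sending sigkill"
	}

	func main() {
		if err := stopWithEscalation(6684, 5*time.Second); err != nil {
			log.Fatal(err)
		}
	}
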
	W0917 02:44:38.605356    6657 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5e:68:bd:79:a7:32
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5e:68:bd:79:a7:32
	I0917 02:44:38.605373    6657 start.go:729] Will try again in 5 seconds ...
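
	The failure above comes from the driver's IP-discovery loop: after launching the VM it rescans /var/db/dhcpd_leases every couple of seconds for a lease whose hardware address matches the VM's generated MAC (5e:68:bd:79:a7:32 here), and gives up after a fixed number of attempts with "could not find an IP address". A minimal Go sketch of that lookup, assuming the key=value block format macOS's bootpd writes to the leases file; findIPByMAC and waitForIP are illustrative names, not the driver's actual API:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)

	// findIPByMAC returns the ip_address of the first lease block whose
	// hw_address matches mac, or "" if no entry matches. Assumes bootpd's
	// format: key=value lines, with hw_address written as "<type>,<mac>"
	// (e.g. "1,5e:68:bd:79:a7:32") and ip_address preceding it per block.
	func findIPByMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 {
					hw = hw[i+1:] // drop the "1," hardware-type prefix
				}
				if hw == mac {
					return ip, nil
				}
			}
		}
		return "", sc.Err()
	}

	// waitForIP polls the leases file every two seconds, mirroring the
	// "Attempt N" cadence in the log, and fails after maxAttempts with an
	// error of the same shape as the one reported above.
	func waitForIP(path, mac string, maxAttempts int) (string, error) {
		for i := 0; i < maxAttempts; i++ {
			if ip, err := findIPByMAC(path, mac); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(2 * time.Second)
		}
		return "", fmt.Errorf("could not find an IP address for %s", mac)
	}

	func main() {
		ip, err := waitForIP("/var/db/dhcpd_leases", "5e:68:bd:79:a7:32", 30)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("lease IP:", ip)
	}
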
	I0917 02:44:43.605885    6657 start.go:360] acquireMachinesLock for docker-flags-802000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:45:36.367994    6657 start.go:364] duration metric: took 52.761836291s to acquireMachinesLock for "docker-flags-802000"
	I0917 02:45:36.368022    6657 start.go:93] Provisioning new machine with config: &{Name:docker-flags-802000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-802000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:45:36.368093    6657 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 02:45:36.389577    6657 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:45:36.389667    6657 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:45:36.389704    6657 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:45:36.398614    6657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54117
	I0917 02:45:36.399147    6657 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:45:36.399580    6657 main.go:141] libmachine: Using API Version  1
	I0917 02:45:36.399590    6657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:45:36.400025    6657 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:45:36.400197    6657 main.go:141] libmachine: (docker-flags-802000) Calling .GetMachineName
	I0917 02:45:36.400308    6657 main.go:141] libmachine: (docker-flags-802000) Calling .DriverName
	I0917 02:45:36.400442    6657 start.go:159] libmachine.API.Create for "docker-flags-802000" (driver="hyperkit")
	I0917 02:45:36.400474    6657 client.go:168] LocalClient.Create starting
	I0917 02:45:36.400521    6657 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem
	I0917 02:45:36.400578    6657 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:36.400590    6657 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:36.400634    6657 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem
	I0917 02:45:36.400676    6657 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:36.400686    6657 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:36.400699    6657 main.go:141] libmachine: Running pre-create checks...
	I0917 02:45:36.400704    6657 main.go:141] libmachine: (docker-flags-802000) Calling .PreCreateCheck
	I0917 02:45:36.400802    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:36.400830    6657 main.go:141] libmachine: (docker-flags-802000) Calling .GetConfigRaw
	I0917 02:45:36.430517    6657 main.go:141] libmachine: Creating machine...
	I0917 02:45:36.430526    6657 main.go:141] libmachine: (docker-flags-802000) Calling .Create
	I0917 02:45:36.430635    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:36.430759    6657 main.go:141] libmachine: (docker-flags-802000) DBG | I0917 02:45:36.430627    6714 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:45:36.430823    6657 main.go:141] libmachine: (docker-flags-802000) Downloading /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1025/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0917 02:45:36.843674    6657 main.go:141] libmachine: (docker-flags-802000) DBG | I0917 02:45:36.843579    6714 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/id_rsa...
	I0917 02:45:37.027287    6657 main.go:141] libmachine: (docker-flags-802000) DBG | I0917 02:45:37.027235    6714 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/docker-flags-802000.rawdisk...
	I0917 02:45:37.027302    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Writing magic tar header
	I0917 02:45:37.027318    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Writing SSH key tar header
	I0917 02:45:37.027725    6657 main.go:141] libmachine: (docker-flags-802000) DBG | I0917 02:45:37.027682    6714 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000 ...
	I0917 02:45:37.403039    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:37.403059    6657 main.go:141] libmachine: (docker-flags-802000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/hyperkit.pid
	I0917 02:45:37.403119    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Using UUID a7f14c05-af82-45ce-8a76-0e8990275948
	I0917 02:45:37.429603    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Generated MAC 4a:57:d:f0:7a:95
	I0917 02:45:37.429633    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-802000
	I0917 02:45:37.429698    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a7f14c05-af82-45ce-8a76-0e8990275948", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:45:37.429735    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a7f14c05-af82-45ce-8a76-0e8990275948", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:45:37.429778    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a7f14c05-af82-45ce-8a76-0e8990275948", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/docker-flags-802000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-802000"}
	I0917 02:45:37.429821    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a7f14c05-af82-45ce-8a76-0e8990275948 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/docker-flags-802000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-802000"
	I0917 02:45:37.429830    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:45:37.432718    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 DEBUG: hyperkit: Pid is 6728
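
	The Arguments/CmdLine entries logged above translate directly into a process launch. A stripped-down Go sketch with os/exec, using the argv from the log (paths factored into a stateDir variable for readability, and the kernel boot arguments shortened here -- the full string is in the CmdLine line above):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		stateDir := "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000"
		// Argument vector taken from the "DEBUG: hyperkit: Arguments" line.
		cmd := exec.Command("/usr/local/bin/hyperkit",
			"-A", "-u",
			"-F", stateDir+"/hyperkit.pid",
			"-c", "2", "-m", "2048M",
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net",
			"-U", "a7f14c05-af82-45ce-8a76-0e8990275948",
			"-s", "2:0,virtio-blk,"+stateDir+"/docker-flags-802000.rawdisk",
			"-s", "3,ahci-cd,"+stateDir+"/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty="+stateDir+"/tty,log="+stateDir+"/console-ring",
			"-f", "kexec,"+stateDir+"/bzimage,"+stateDir+"/initrd,loglevel=3 console=ttyS0", // boot args shortened
		)
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		// Matches the driver's subsequent "Pid is <n>" debug line.
		log.Printf("hyperkit pid: %d", cmd.Process.Pid)
	}
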
	I0917 02:45:37.433161    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 0
	I0917 02:45:37.433176    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:37.433255    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:45:37.434252    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:45:37.434302    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:45:37.434316    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:45:37.434324    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:45:37.434331    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:45:37.434346    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:45:37.434380    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:45:37.434406    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:45:37.434425    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:45:37.434436    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:45:37.434445    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:45:37.434472    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:45:37.434486    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:45:37.434502    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:45:37.434517    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:45:37.434529    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:45:37.434540    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:45:37.434548    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:45:37.434554    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:45:37.434571    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:45:37.440661    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:45:37.448686    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/docker-flags-802000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:45:37.449625    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:45:37.449642    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:45:37.449649    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:45:37.449661    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:45:37.829969    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:45:37.829989    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:45:37.944575    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:45:37.944594    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:45:37.944606    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:45:37.944615    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:45:37.945518    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:45:37.945531    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:45:39.434811    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 1
	I0917 02:45:39.434825    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:39.434962    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:45:39.435742    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:45:39.435801    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:45:39.435809    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:45:39.435817    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:45:39.435822    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:45:39.435832    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:45:39.435841    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:45:39.435848    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:45:39.435853    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:45:39.435867    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:45:39.435876    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:45:39.435884    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:45:39.435890    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:45:39.435895    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:45:39.435903    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:45:39.435917    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:45:39.435929    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:45:39.435947    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:45:39.435961    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:45:39.435974    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:45:41.436861    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 2
	I0917 02:45:41.436876    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:41.436987    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:45:41.437849    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:45:41.437919    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:45:41.437931    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:45:41.437940    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:45:41.437945    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:45:41.437951    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:45:41.437959    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:45:41.437973    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:45:41.437990    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:45:41.438012    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:45:41.438023    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:45:41.438030    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:45:41.438043    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:45:41.438057    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:45:41.438064    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:45:41.438071    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:45:41.438077    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:45:41.438082    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:45:41.438088    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:45:41.438093    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:45:43.376825    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:45:43.376926    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:45:43.376933    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:45:43.397051    6657 main.go:141] libmachine: (docker-flags-802000) DBG | 2024/09/17 02:45:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:45:43.439904    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 3
	I0917 02:45:43.439931    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:43.440108    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:45:43.441577    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:45:43.441696    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:45:43.441716    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:45:43.441734    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:45:43.441747    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:45:43.441762    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:45:43.441784    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:45:43.441801    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:45:43.441811    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:45:43.441848    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:45:43.441866    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:45:43.441877    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:45:43.441902    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:45:43.441911    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:45:43.441920    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:45:43.441931    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:45:43.441940    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:45:43.441949    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:45:43.441976    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:45:43.441993    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:45:45.443535    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 4
	I0917 02:45:45.443552    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:45.443656    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:45:45.444457    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:45:45.444522    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:45:45.444531    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:45:45.444550    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:45:45.444572    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:45:45.444599    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:45:45.444619    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:45:45.444634    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:45:45.444648    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:45:45.444670    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:45:45.444679    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:45:45.444686    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:45:45.444693    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:45:45.444699    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:45:45.444719    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:45:45.444731    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:45:45.444744    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:45:45.444751    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:45:45.444758    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:45:45.444766    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:45:47.446773    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 5
	I0917 02:45:47.446789    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:47.446834    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:45:47.447629    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:45:47.447682    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:45:47.447692    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:45:47.447704    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:45:47.447711    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:45:47.447727    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:45:47.447733    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:45:47.447739    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:45:47.447745    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:45:47.447751    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:45:47.447759    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:45:47.447766    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:45:47.447771    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:45:47.447779    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:45:47.447794    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:45:47.447805    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:45:47.447814    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:45:47.447827    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:45:47.447843    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:45:47.447856    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:45:49.449890    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 6
	I0917 02:45:49.449906    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:49.449928    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:45:49.450713    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:45:49.450743    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:45:49.450757    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:45:49.450767    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:45:49.450778    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:45:49.450785    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:45:49.450791    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:45:49.450796    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:45:49.450802    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:45:49.450810    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:45:49.450817    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:45:49.450822    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:45:49.450828    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:45:49.450836    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:45:49.450845    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:45:49.450853    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:45:49.450864    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:45:49.450872    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:45:49.450879    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:45:49.450885    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	[log condensed: attempts 7 through 20 (02:45:51.452 to 02:46:17.486, one attempt every ~2 seconds) repeat the scan above verbatim; each finds hyperkit pid 6728 still running and the same 18 entries in /var/db/dhcpd_leases, none of them matching 4a:57:d:f0:7a:95.]
	I0917 02:46:19.488575    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 21
	I0917 02:46:19.488591    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:46:19.488637    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:46:19.489468    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:46:19.489517    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:46:19.489528    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:46:19.489538    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:46:19.489546    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:46:19.489553    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:46:19.489558    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:46:19.489564    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:46:19.489571    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:46:19.489576    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:46:19.489583    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:46:19.489592    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:46:19.489609    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:46:19.489617    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:46:19.489623    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:46:19.489629    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:46:19.489635    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:46:19.489643    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:46:19.489657    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:46:19.489670    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:46:21.490981    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 22
	I0917 02:46:21.490998    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:46:21.491044    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:46:21.491814    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:46:21.491861    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:46:21.491879    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:46:21.491904    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:46:21.491917    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:46:21.491926    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:46:21.491933    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:46:21.491939    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:46:21.491945    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:46:21.491951    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:46:21.491957    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:46:21.491979    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:46:21.491990    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:46:21.492014    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:46:21.492026    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:46:21.492039    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:46:21.492049    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:46:21.492057    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:46:21.492068    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:46:21.492076    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:46:23.494080    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 23
	I0917 02:46:23.494094    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:46:23.494155    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:46:23.494942    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:46:23.495002    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:46:23.495016    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:46:23.495028    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:46:23.495037    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:46:23.495043    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:46:23.495049    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:46:23.495064    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:46:23.495079    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:46:23.495089    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:46:23.495100    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:46:23.495110    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:46:23.495122    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:46:23.495133    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:46:23.495158    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:46:23.495177    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:46:23.495186    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:46:23.495200    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:46:23.495211    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:46:23.495220    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:46:25.496172    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 24
	I0917 02:46:25.496186    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:46:25.496248    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:46:25.497029    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:46:25.497089    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:46:25.497102    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:46:25.497118    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:46:25.497127    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:46:25.497134    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:46:25.497143    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:46:25.497156    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:46:25.497164    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:46:25.497175    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:46:25.497182    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:46:25.497189    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:46:25.497197    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:46:25.497203    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:46:25.497209    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:46:25.497223    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:46:25.497236    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:46:25.497247    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:46:25.497252    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:46:25.497267    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:46:27.498112    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 25
	I0917 02:46:27.498124    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:46:27.498193    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:46:27.498969    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:46:27.499018    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:46:27.499030    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:46:27.499040    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:46:27.499048    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:46:27.499056    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:46:27.499077    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:46:27.499097    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:46:27.499109    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:46:27.499117    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:46:27.499124    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:46:27.499145    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:46:27.499177    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:46:27.499192    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:46:27.499200    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:46:27.499215    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:46:27.499235    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:46:27.499250    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:46:27.499263    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:46:27.499271    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:46:29.500961    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 26
	I0917 02:46:29.500972    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:46:29.501046    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:46:29.501844    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:46:29.501894    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:46:29.501908    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:46:29.501923    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:46:29.501931    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:46:29.501937    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:46:29.501942    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:46:29.501975    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:46:29.501993    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:46:29.502004    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:46:29.502010    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:46:29.502021    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:46:29.502028    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:46:29.502036    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:46:29.502043    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:46:29.502050    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:46:29.502075    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:46:29.502088    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:46:29.502096    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:46:29.502102    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:46:31.504041    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 27
	I0917 02:46:31.504053    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:46:31.504122    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:46:31.504901    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:46:31.504959    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:46:31.504969    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:46:31.504977    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:46:31.504983    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:46:31.504992    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:46:31.504998    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:46:31.505007    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:46:31.505013    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:46:31.505018    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:46:31.505026    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:46:31.505032    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:46:31.505038    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:46:31.505044    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:46:31.505052    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:46:31.505059    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:46:31.505066    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:46:31.505073    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:46:31.505082    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:46:31.505088    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:46:33.506601    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 28
	I0917 02:46:33.506646    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:46:33.506674    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:46:33.507720    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:46:33.507759    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:46:33.507771    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:46:33.507781    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:46:33.507789    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:46:33.507796    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:46:33.507802    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:46:33.507807    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:46:33.507813    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:46:33.507819    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:46:33.507854    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:46:33.507869    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:46:33.507876    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:46:33.507885    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:46:33.507893    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:46:33.507903    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:46:33.507911    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:46:33.507918    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:46:33.507924    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:46:33.507930    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:46:35.509161    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Attempt 29
	I0917 02:46:35.509174    6657 main.go:141] libmachine: (docker-flags-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:46:35.509296    6657 main.go:141] libmachine: (docker-flags-802000) DBG | hyperkit pid from json: 6728
	I0917 02:46:35.510065    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Searching for 4a:57:d:f0:7a:95 in /var/db/dhcpd_leases ...
	I0917 02:46:35.510120    6657 main.go:141] libmachine: (docker-flags-802000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:46:35.510131    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:46:35.510140    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:46:35.510146    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:46:35.510152    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:46:35.510158    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:46:35.510164    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:46:35.510184    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:46:35.510210    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:46:35.510217    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:46:35.510252    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:46:35.510260    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:46:35.510291    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:46:35.510300    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:46:35.510333    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:46:35.510350    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:46:35.510357    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:46:35.510364    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:46:35.510372    6657 main.go:141] libmachine: (docker-flags-802000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:46:37.512365    6657 client.go:171] duration metric: took 1m1.111605216s to LocalClient.Create
	I0917 02:46:39.512997    6657 start.go:128] duration metric: took 1m3.144608082s to createHost
	I0917 02:46:39.513009    6657 start.go:83] releasing machines lock for "docker-flags-802000", held for 1m3.144715965s
	W0917 02:46:39.513075    6657 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-802000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4a:57:d:f0:7a:95
	I0917 02:46:39.576244    6657 out.go:201] 
	W0917 02:46:39.597086    6657 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4a:57:d:f0:7a:95
	W0917 02:46:39.597099    6657 out.go:270] * 
	W0917 02:46:39.597779    6657 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:46:39.660082    6657 out.go:201] 

                                                
                                                
** /stderr **
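The stderr trace above shows the failure mode concretely: the hyperkit driver polls the macOS DHCP leases file roughly every two seconds (Attempt 20 through Attempt 29 here), looking for a lease whose HWAddress matches the new VM's MAC (4a:57:d:f0:7a:95). Only the 18 stale minikube leases for 192.169.0.2-19 ever appear, so once the retry budget runs out the driver reports "IP address never found in dhcp leases file". A minimal Go sketch of that polling pattern, not the driver's actual code, assuming macOS bootpd lease blocks of the form `{ name=... ip_address=... hw_address=1,<mac> ... }` (the function name, regex, and 30-attempt budget are illustrative; the 2-second interval and error text come from the log):

	// findIPForMAC sketches the retry loop visible in the log: scan
	// /var/db/dhcpd_leases for a lease matching the VM's MAC address.
	package main

	import (
		"fmt"
		"os"
		"regexp"
		"time"
	)

	func findIPForMAC(mac string, maxAttempts int) (string, error) {
		// Capture ip_address and hw_address from each lease block; the "1,"
		// in hw_address is the hardware-type prefix shown as ID in the log.
		re := regexp.MustCompile(`ip_address=(\S+)\s+hw_address=1,(\S+)`)
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if data, err := os.ReadFile("/var/db/dhcpd_leases"); err == nil {
				for _, m := range re.FindAllStringSubmatch(string(data), -1) {
					if m[2] == mac {
						return m[1], nil // lease found for our VM
					}
				}
			}
			time.Sleep(2 * time.Second) // the log shows ~2s between attempts
		}
		return "", fmt.Errorf("could not find an IP address for %s", mac)
	}

	func main() {
		ip, err := findIPForMAC("4a:57:d:f0:7a:95", 30)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ip)
	}

In this run no matching lease ever appears, so the sketch, like the driver, would exhaust its attempts and return the "could not find an IP address" error seen above.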
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-802000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-802000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-802000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (181.573804ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-802000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-802000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-802000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-802000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (166.912548ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-802000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-802000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-802000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
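Both assertions follow the same pattern: shell into the VM with `minikube ssh`, dump the docker systemd unit's Environment and ExecStart properties, and require each `--docker-env` pair ("FOO=BAR", "BAZ=BAT") and `--docker-opt` flag ("--debug") to appear in the output. Here the ssh step itself exits 50 because the VM never got an IP, so both checks match against an empty string. A hedged Go sketch of that check (the helper name and structure are assumptions; only the commands and expected substrings come from the log):

	// checkDockerProperty illustrates the assertion pattern above: run
	// `systemctl show docker --property=<prop>` inside the named profile
	// and verify every expected substring is present in the output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func checkDockerProperty(profile, prop string, want []string) error {
		cmd := fmt.Sprintf("sudo systemctl show docker --property=%s --no-pager", prop)
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%q failed: %v", cmd, err) // exit status 50 here: no VM IP
		}
		for _, w := range want {
			if !strings.Contains(string(out), w) {
				return fmt.Errorf("expected %q in %s, got %q", w, prop, out)
			}
		}
		return nil
	}

	func main() {
		profile := "docker-flags-802000"
		if err := checkDockerProperty(profile, "Environment", []string{"FOO=BAR", "BAZ=BAT"}); err != nil {
			fmt.Println(err)
		}
		if err := checkDockerProperty(profile, "ExecStart", []string{"--debug"}); err != nil {
			fmt.Println(err)
		}
	}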
panic.go:629: *** TestDockerFlags FAILED at 2024-09-17 02:46:40.120284 -0700 PDT m=+4146.239474846
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-802000 -n docker-flags-802000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-802000 -n docker-flags-802000: exit status 7 (84.402645ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 02:46:40.202619    6761 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 02:46:40.202641    6761 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-802000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-802000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-802000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-802000: (5.260507378s)
--- FAIL: TestDockerFlags (252.21s)

                                                
                                    
TestForceSystemdFlag (252.15s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-972000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
E0917 02:41:35.848229    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-972000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.56751789s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-972000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-972000" primary control-plane node in "force-systemd-flag-972000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-972000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:41:30.041785    6614 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:41:30.042033    6614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:41:30.042038    6614 out.go:358] Setting ErrFile to fd 2...
	I0917 02:41:30.042042    6614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:41:30.042207    6614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:41:30.043714    6614 out.go:352] Setting JSON to false
	I0917 02:41:30.066497    6614 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4260,"bootTime":1726561830,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:41:30.066647    6614 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:41:30.088570    6614 out.go:177] * [force-systemd-flag-972000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:41:30.129484    6614 notify.go:220] Checking for updates...
	I0917 02:41:30.150520    6614 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:41:30.171472    6614 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:41:30.192288    6614 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:41:30.213505    6614 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:41:30.234583    6614 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:41:30.255301    6614 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:41:30.275897    6614 config.go:182] Loaded profile config "force-systemd-env-601000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:41:30.275991    6614 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:41:30.304546    6614 out.go:177] * Using the hyperkit driver based on user configuration
	I0917 02:41:30.345504    6614 start.go:297] selected driver: hyperkit
	I0917 02:41:30.345520    6614 start.go:901] validating driver "hyperkit" against <nil>
	I0917 02:41:30.345530    6614 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:41:30.348642    6614 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:41:30.348775    6614 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:41:30.357293    6614 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:41:30.361301    6614 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:41:30.361318    6614 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:41:30.361349    6614 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:41:30.361574    6614 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 02:41:30.361604    6614 cni.go:84] Creating CNI manager for ""
	I0917 02:41:30.361640    6614 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:41:30.361653    6614 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:41:30.361720    6614 start.go:340] cluster config:
	{Name:force-systemd-flag-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
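The cluster config dumped above is what gets persisted as the profile's config.json (see the "Saving config to ..." line a few entries below). A minimal sketch of reading such a file back, assuming a hypothetical subset of the fields visible in the dump (Name, Driver, Memory, CPUs, DiskSize, KubernetesConfig.KubernetesVersion); this is illustrative, not minikube's actual loader:

// cfgdump: read a saved minikube profile config and print a summary.
// The struct below is a hypothetical subset of the fields visible in
// the log dump; json.Unmarshal ignores fields we don't declare.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type kubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
}

type clusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	DiskSize         int
	KubernetesConfig kubernetesConfig
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: cfgdump <path-to-config.json>")
		os.Exit(2)
	}
	// e.g. .minikube/profiles/<profile-name>/config.json
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg clusterConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s: driver=%s mem=%dMB cpus=%d disk=%dMB k8s=%s\n",
		cfg.Name, cfg.Driver, cfg.Memory, cfg.CPUs, cfg.DiskSize,
		cfg.KubernetesConfig.KubernetesVersion)
}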
	I0917 02:41:30.361810    6614 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:41:30.403272    6614 out.go:177] * Starting "force-systemd-flag-972000" primary control-plane node in "force-systemd-flag-972000" cluster
	I0917 02:41:30.425454    6614 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:41:30.425486    6614 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:41:30.425500    6614 cache.go:56] Caching tarball of preloaded images
	I0917 02:41:30.425617    6614 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:41:30.425627    6614 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:41:30.425703    6614 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/force-systemd-flag-972000/config.json ...
	I0917 02:41:30.425719    6614 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/force-systemd-flag-972000/config.json: {Name:mk015a31549d7b44d9a6e66717d6b77168e09a88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:41:30.426026    6614 start.go:360] acquireMachinesLock for force-systemd-flag-972000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:42:27.473122    6614 start.go:364] duration metric: took 57.046806165s to acquireMachinesLock for "force-systemd-flag-972000"
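The acquireMachinesLock lines above show the lock's retry parameters (Delay:500ms, Timeout:13m0s) and the duration metric once it is won; here it took ~57s because another profile held the lock. A minimal sketch of that acquire-with-retry pattern, using a hypothetical O_EXCL lock file as a stand-in for minikube's named-mutex library, purely to illustrate the delay/timeout loop:

// lockdemo: spin on an exclusive lock file with a fixed retry delay
// and an overall timeout, mirroring the Delay/Timeout spec in the log.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return nil // lock held; caller removes path to release
		}
		if !errors.Is(err, os.ErrExist) {
			return err // unexpected I/O error, give up
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s after %s", path, timeout)
		}
		time.Sleep(delay) // someone else holds the lock; retry
	}
}

func main() {
	const lock = "/tmp/acquireMachinesLock.demo"
	start := time.Now()
	if err := acquire(lock, 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer os.Remove(lock)
	// mirrors the "duration metric: took ..." line in the log
	fmt.Printf("duration metric: took %s\n", time.Since(start))
}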
	I0917 02:42:27.473178    6614 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:42:27.473248    6614 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 02:42:27.515378    6614 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:42:27.515539    6614 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:42:27.515576    6614 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:42:27.524240    6614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54089
	I0917 02:42:27.524611    6614 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:42:27.525021    6614 main.go:141] libmachine: Using API Version  1
	I0917 02:42:27.525032    6614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:42:27.525303    6614 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:42:27.525446    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .GetMachineName
	I0917 02:42:27.525550    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .DriverName
	I0917 02:42:27.525661    6614 start.go:159] libmachine.API.Create for "force-systemd-flag-972000" (driver="hyperkit")
	I0917 02:42:27.525690    6614 client.go:168] LocalClient.Create starting
	I0917 02:42:27.525723    6614 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem
	I0917 02:42:27.525772    6614 main.go:141] libmachine: Decoding PEM data...
	I0917 02:42:27.525786    6614 main.go:141] libmachine: Parsing certificate...
	I0917 02:42:27.525854    6614 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem
	I0917 02:42:27.525892    6614 main.go:141] libmachine: Decoding PEM data...
	I0917 02:42:27.525904    6614 main.go:141] libmachine: Parsing certificate...
	I0917 02:42:27.525918    6614 main.go:141] libmachine: Running pre-create checks...
	I0917 02:42:27.525927    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .PreCreateCheck
	I0917 02:42:27.526012    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:27.526172    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .GetConfigRaw
	I0917 02:42:27.536651    6614 main.go:141] libmachine: Creating machine...
	I0917 02:42:27.536661    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .Create
	I0917 02:42:27.536755    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:27.536871    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | I0917 02:42:27.536738    6638 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:42:27.536944    6614 main.go:141] libmachine: (force-systemd-flag-972000) Downloading /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1025/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0917 02:42:27.959591    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | I0917 02:42:27.959518    6638 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/id_rsa...
	I0917 02:42:28.056783    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | I0917 02:42:28.056723    6638 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/force-systemd-flag-972000.rawdisk...
	I0917 02:42:28.056806    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Writing magic tar header
	I0917 02:42:28.056823    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Writing SSH key tar header
	I0917 02:42:28.057168    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | I0917 02:42:28.057138    6638 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000 ...
	I0917 02:42:28.512228    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:28.512246    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/hyperkit.pid
	I0917 02:42:28.512261    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Using UUID 772bcd6e-72c3-49c1-afe5-ad346c2f10d7
	I0917 02:42:28.539457    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Generated MAC b2:31:92:e0:55:28
	I0917 02:42:28.539472    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-972000
	I0917 02:42:28.539499    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"772bcd6e-72c3-49c1-afe5-ad346c2f10d7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:42:28.539528    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"772bcd6e-72c3-49c1-afe5-ad346c2f10d7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:42:28.539600    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "772bcd6e-72c3-49c1-afe5-ad346c2f10d7", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/force-systemd-flag-972000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-972000"}
	I0917 02:42:28.539652    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 772bcd6e-72c3-49c1-afe5-ad346c2f10d7 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/force-systemd-flag-972000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-972000"
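The DEBUG: hyperkit: CmdLine entry above is the fully expanded invocation: -F pidfile, -c CPUs, -m memory, -s PCI slot assignments (hostbridge, virtio-net, virtio-blk raw disk, ahci-cd ISO, virtio-rnd), -U UUID, -l serial console, and -f kexec,<kernel>,<initrd>,<cmdline> for direct kernel boot. A minimal sketch of assembling that argv; hyperkitArgs and its parameters are hypothetical stand-ins, but the flag layout mirrors the log:

// hkargs: build a hyperkit argument vector shaped like the one logged
// by docker-machine-driver-hyperkit. stateDir stands in for the machine
// directory; file names are simplified placeholders.
package main

import (
	"fmt"
	"strings"
)

func hyperkitArgs(stateDir, uuid, kernelCmdline string, cpus, memMB int) []string {
	return []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid",
		"-c", fmt.Sprint(cpus),
		"-m", fmt.Sprintf("%dM", memMB),
		"-s", "0:0,hostbridge", // PCI slot 0: host bridge
		"-s", "31,lpc", // LPC bridge backing the serial port
		"-s", "1:0,virtio-net",
		"-U", uuid,
		"-s", "2:0,virtio-blk," + stateDir + "/disk.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
		// direct kernel boot: kexec,<kernel>,<initrd>,<kernel cmdline>
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd," + kernelCmdline,
	}
}

func main() {
	args := hyperkitArgs("/path/to/machine-dir",
		"772bcd6e-72c3-49c1-afe5-ad346c2f10d7",
		"loglevel=3 console=ttyS0", 2, 2048)
	fmt.Println("/usr/local/bin/hyperkit " + strings.Join(args, " "))
}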
	I0917 02:42:28.539667    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:42:28.542631    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 DEBUG: hyperkit: Pid is 6652
	I0917 02:42:28.543079    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 0
	I0917 02:42:28.543102    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:28.543145    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:28.544122    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:28.544176    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:28.544194    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:28.544232    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:28.544244    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:28.544251    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:28.544257    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:28.544269    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:28.544291    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:28.544333    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:28.544358    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:28.544370    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:28.544384    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:28.544410    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:28.544427    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:28.544442    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:28.544459    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:28.544468    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:28.544476    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:28.544491    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
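Each "Attempt N" block above is one pass over /var/db/dhcpd_leases looking for the VM's freshly generated MAC (b2:31:92:e0:55:28); the 18 existing entries all belong to other VMs, so the driver waits ~2s and rescans. A minimal sketch of that scan, assuming the usual bootpd lease-block format ({ name=... ip_address=... hw_address=1,... }); findIPByMAC is a hypothetical helper, not minikube's actual parser:

// leasescan: find the IP whose hw_address matches a MAC in the macOS
// DHCP lease file, as the "Searching for <MAC>" log lines describe.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// normalizeMAC strips leading zeros per octet, since the lease file
// records e.g. a:b6:8:34:25:a6 rather than 0a:b6:08:34:25:a6.
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		if t := strings.TrimLeft(p, "0"); t != "" {
			parts[i] = t
		} else {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	want := normalizeMAC(mac)
	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=1,"):
			hw = strings.TrimPrefix(line, "hw_address=1,")
		case line == "}": // one lease block ends: compare, then reset
			if hw != "" && normalizeMAC(hw) == want {
				return ip, nil
			}
			ip, hw = "", ""
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("MAC %s not found", mac)
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "b2:31:92:e0:55:28")
	if err != nil {
		// the driver logs "Attempt N" and retries every ~2s on this path
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}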
	I0917 02:42:28.550732    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:42:28.558826    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:42:28.559690    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:42:28.559706    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:42:28.559716    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:42:28.559729    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:42:28.938867    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:42:28.938883    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:42:29.054098    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:42:29.054126    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:42:29.054144    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:42:29.054163    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:42:29.054996    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:42:29.055006    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:42:30.546416    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 1
	I0917 02:42:30.546432    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:30.546506    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:30.547305    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:30.547369    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:30.547378    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:30.547384    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:30.547390    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:30.547417    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:30.547428    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:30.547438    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:30.547447    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:30.547460    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:30.547471    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:30.547479    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:30.547487    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:30.547494    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:30.547500    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:30.547510    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:30.547523    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:30.547531    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:30.547539    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:30.547546    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:32.547645    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 2
	I0917 02:42:32.547661    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:32.547721    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:32.548528    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:32.548584    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:32.548594    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:32.548601    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:32.548611    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:32.548619    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:32.548626    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:32.548645    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:32.548655    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:32.548662    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:32.548670    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:32.548676    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:32.548682    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:32.548689    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:32.548698    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:32.548705    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:32.548712    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:32.548720    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:32.548727    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:32.548735    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:34.464970    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:42:34.465129    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:42:34.465138    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:42:34.485013    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:42:34 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:42:34.549263    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 3
	I0917 02:42:34.549324    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:34.549446    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:34.550923    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:34.551031    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:34.551050    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:34.551068    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:34.551079    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:34.551096    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:34.551112    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:34.551147    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:34.551191    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:34.551214    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:34.551251    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:34.551263    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:34.551274    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:34.551317    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:34.551332    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:34.551341    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:34.551349    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:34.551365    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:34.551386    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:34.551399    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:36.552032    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 4
	I0917 02:42:36.552049    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:36.552116    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:36.552928    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:36.552988    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:36.552999    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:36.553009    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:36.553019    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:36.553033    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:36.553050    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:36.553060    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:36.553113    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:36.553121    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:36.553137    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:36.553146    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:36.553153    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:36.553159    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:36.553173    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:36.553185    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:36.553203    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:36.553212    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:36.553218    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:36.553240    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:38.553944    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 5
	I0917 02:42:38.553957    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:38.554011    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:38.554798    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:38.554837    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:38.554848    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:38.554866    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:38.554875    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:38.554882    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:38.554892    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:38.554898    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:38.554907    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:38.554913    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:38.554918    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:38.554935    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:38.554947    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:38.554954    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:38.554959    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:38.554973    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:38.554986    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:38.554993    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:38.555001    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:38.555009    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:40.555807    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 6
	I0917 02:42:40.555823    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:40.555885    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:40.556710    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:40.556751    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:40.556771    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:40.556782    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:40.556790    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:40.556797    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:40.556806    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:40.556815    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:40.556822    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:40.556829    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:40.556841    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:40.556848    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:40.556853    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:40.556879    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:40.556889    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:40.556896    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:40.556903    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:40.556921    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:40.556947    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:40.556965    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:42.558919    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 7
	I0917 02:42:42.558931    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:42.558972    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:42.559741    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:42.559809    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:42.559820    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:42.559827    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:42.559834    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:42.559855    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:42.559864    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:42.559881    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:42.559894    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:42.559901    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:42.559909    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:42.559925    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:42.559938    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:42.559945    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:42.559951    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:42.559957    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:42.559971    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:42.559985    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:42.559993    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:42.560006    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:44.560399    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 8
	I0917 02:42:44.560414    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:44.560496    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:44.561288    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:44.561316    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:44.561329    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:44.561342    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:44.561352    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:44.561377    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:44.561388    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:44.561398    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:44.561406    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:44.561412    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:44.561418    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:44.561437    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:44.561446    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:44.561453    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:44.561470    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:44.561479    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:44.561489    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:44.561504    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:44.561516    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:44.561526    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:46.561667    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 9
	I0917 02:42:46.561683    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:46.561746    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:46.562550    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:46.562597    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:46.562607    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:46.562616    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:46.562622    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:46.562634    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:46.562643    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:46.562667    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:46.562682    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:46.562690    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:46.562695    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:46.562703    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:46.562710    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:46.562716    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:46.562722    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:46.562734    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:46.562746    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:46.562754    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:46.562765    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:46.562775    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:48.564792    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 10
	I0917 02:42:48.564805    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:48.564863    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:48.565658    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:48.565718    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:48.565728    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:48.565735    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:48.565741    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:48.565749    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:48.565755    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:48.565762    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:48.565768    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:48.565775    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:48.565791    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:48.565805    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:48.565818    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:48.565824    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:48.565833    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:48.565840    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:48.565848    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:48.565854    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:48.565862    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:48.565871    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:50.566155    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 11
	I0917 02:42:50.566171    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:50.566224    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:50.566992    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:50.567046    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:50.567055    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:50.567062    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:50.567070    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:50.567079    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:50.567090    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:50.567105    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:50.567117    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:50.567124    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:50.567137    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:50.567146    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:50.567153    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:50.567158    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:50.567179    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:50.567190    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:50.567209    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:50.567221    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:50.567231    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:50.567239    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:52.568527    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 12
	I0917 02:42:52.568540    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:52.568652    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:52.569474    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:52.569490    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:52.569500    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:52.569506    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:52.569525    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:52.569533    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:52.569541    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:52.569553    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:52.569561    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:52.569567    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:52.569589    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:52.569599    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:52.569607    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:52.569614    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:52.569630    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:52.569642    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:52.569650    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:52.569657    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:52.569666    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:52.569673    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:54.571783    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 13
	I0917 02:42:54.571795    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:54.571833    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:54.572946    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:54.572989    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:54.573001    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:54.573015    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:54.573035    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:54.573056    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:54.573069    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:54.573090    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:54.573102    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:54.573115    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:54.573125    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:54.573131    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:54.573138    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:54.573146    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:54.573154    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:54.573161    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:54.573168    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:54.573180    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:54.573187    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:54.573195    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:56.575195    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 14
	I0917 02:42:56.575215    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:56.575275    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:56.576070    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:56.576137    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:56.576148    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:56.576163    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:56.576171    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:56.576177    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:56.576182    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:56.576188    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:56.576203    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:56.576213    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:56.576220    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:56.576228    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:56.576239    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:56.576251    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:56.576260    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:56.576274    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:56.576287    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:56.576299    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:56.576307    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:56.576314    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:58.576557    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 15
	I0917 02:42:58.576570    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:58.576627    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:42:58.577435    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:42:58.577448    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:58.577465    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:58.577472    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:58.577482    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:58.577490    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:58.577497    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:58.577503    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:58.577510    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:58.577515    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:58.577522    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:58.577531    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:58.577539    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:58.577548    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:58.577563    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:58.577571    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:58.577578    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:58.577583    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:58.577589    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:58.577595    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:00.578009    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 16
	I0917 02:43:00.578024    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:00.578091    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:00.578966    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:00.579002    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:00.579010    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:00.579026    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:00.579033    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:00.579039    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:00.579046    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:00.579052    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:00.579060    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:00.579069    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:00.579076    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:00.579083    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:00.579090    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:00.579097    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:00.579104    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:00.579109    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:00.579122    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:00.579134    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:00.579142    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:00.579149    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:02.580916    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 17
	I0917 02:43:02.580929    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:02.580995    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:02.581812    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:02.581871    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:02.581882    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:02.581890    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:02.581896    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:02.581926    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:02.581939    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:02.581947    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:02.581954    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:02.581961    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:02.581966    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:02.581972    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:02.581984    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:02.581992    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:02.581998    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:02.582009    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:02.582017    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:02.582025    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:02.582039    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:02.582051    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:04.583076    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 18
	I0917 02:43:04.583091    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:04.583182    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:04.583998    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:04.584049    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:04.584062    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:04.584081    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:04.584090    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:04.584102    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:04.584109    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:04.584117    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:04.584126    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:04.584133    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:04.584147    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:04.584154    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:04.584161    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:04.584167    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:04.584173    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:04.584185    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:04.584194    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:04.584202    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:04.584208    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:04.584223    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:06.585082    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 19
	I0917 02:43:06.585094    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:06.585157    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:06.585938    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:06.586002    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:06.586018    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:06.586033    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:06.586043    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:06.586051    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:06.586057    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:06.586065    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:06.586070    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:06.586077    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:06.586090    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:06.586097    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:06.586104    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:06.586118    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:06.586148    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:06.586181    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:06.586189    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:06.586195    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:06.586201    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:06.586209    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:08.587454    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 20
	I0917 02:43:08.587467    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:08.587533    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:08.588611    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:08.588675    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:08.588684    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:08.588693    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:08.588698    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:08.588714    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:08.588723    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:08.588730    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:08.588737    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:08.588755    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:08.588767    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:08.588775    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:08.588784    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:08.588793    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:08.588808    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:08.588825    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:08.588835    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:08.588849    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:08.588860    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:08.588870    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:10.589800    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 21
	I0917 02:43:10.589813    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:10.589881    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:10.590687    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:10.590724    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:10.590734    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:10.590747    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:10.590754    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:10.590781    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:10.590793    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:10.590810    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:10.590820    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:10.590828    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:10.590839    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:10.590849    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:10.590866    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:10.590879    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:10.590887    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:10.590895    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:10.590902    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:10.590909    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:10.590919    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:10.590928    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:12.592890    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 22
	I0917 02:43:12.592903    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:12.592949    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:12.593840    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:12.593888    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:12.593900    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:12.593928    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:12.593940    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:12.593952    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:12.593963    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:12.593983    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:12.593995    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:12.594004    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:12.594011    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:12.594032    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:12.594043    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:12.594057    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:12.594070    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:12.594080    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:12.594093    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:12.594099    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:12.594107    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:12.594116    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:14.596090    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 23
	I0917 02:43:14.596103    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:14.596163    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:14.596922    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:14.596988    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:14.596999    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:14.597008    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:14.597014    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:14.597021    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:14.597026    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:14.597033    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:14.597040    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:14.597063    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:14.597073    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:14.597083    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:14.597092    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:14.597099    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:14.597107    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:14.597113    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:14.597119    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:14.597133    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:14.597141    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:14.597150    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:16.598004    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 24
	I0917 02:43:16.598025    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:16.598066    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:16.598869    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:16.598881    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:16.598887    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:16.598894    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:16.598911    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:16.598917    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:16.598927    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:16.598933    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:16.598939    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:16.598947    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:16.598952    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:16.598958    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:16.598964    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:16.598982    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:16.598989    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:16.598995    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:16.599003    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:16.599019    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:16.599033    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:16.599041    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:18.601048    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 25
	I0917 02:43:18.601061    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:18.601116    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:18.601963    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:18.601985    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:18.601997    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:18.602006    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:18.602014    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:18.602035    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:18.602048    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:18.602055    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:18.602063    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:18.602069    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:18.602077    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:18.602086    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:18.602094    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:18.602108    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:18.602121    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:18.602129    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:18.602134    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:18.602141    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:18.602149    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:18.602157    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:20.602525    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 26
	I0917 02:43:20.602538    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:20.602640    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:20.603468    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:20.603477    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:20.603484    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:20.603489    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:20.603519    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:20.603534    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:20.603544    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:20.603552    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:20.603567    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:20.603579    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:20.603587    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:20.603594    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:20.603601    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:20.603608    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:20.603621    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:20.603629    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:20.603637    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:20.603645    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:20.603657    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:20.603666    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:22.605632    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 27
	I0917 02:43:22.605645    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:22.605701    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:22.606501    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:22.606554    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:22.606566    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:22.606576    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:22.606582    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:22.606589    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:22.606594    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:22.606601    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:22.606610    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:22.606631    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:22.606643    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:22.606660    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:22.606673    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:22.606681    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:22.606688    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:22.606695    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:22.606702    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:22.606711    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:22.606718    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:22.606734    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:24.608237    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 28
	I0917 02:43:24.608251    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:24.608317    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:24.609407    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:24.609437    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:24.609446    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:24.609454    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:24.609460    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:24.609466    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:24.609474    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:24.609482    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:24.609489    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:24.609496    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:24.609503    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:24.609508    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:24.609520    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:24.609532    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:24.609542    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:24.609551    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:24.609574    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:24.609587    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:24.609595    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:24.609603    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:26.611027    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 29
	I0917 02:43:26.611041    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:26.611110    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:26.611881    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for b2:31:92:e0:55:28 in /var/db/dhcpd_leases ...
	I0917 02:43:26.611939    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:43:26.611950    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:43:26.611979    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:43:26.611994    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:43:26.612008    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:43:26.612018    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:43:26.612033    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:43:26.612042    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:43:26.612048    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:43:26.612055    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:43:26.612062    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:43:26.612067    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:43:26.612095    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:43:26.612107    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:43:26.612122    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:43:26.612130    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:43:26.612143    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:43:26.612155    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:43:26.612215    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:43:28.614187    6614 client.go:171] duration metric: took 1m1.088209359s to LocalClient.Create
	I0917 02:43:30.616325    6614 start.go:128] duration metric: took 1m3.142773246s to createHost
	I0917 02:43:30.616340    6614 start.go:83] releasing machines lock for "force-systemd-flag-972000", held for 1m3.14291141s
	W0917 02:43:30.616354    6614 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:31:92:e0:55:28
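
The failure above is the hyperkit driver timing out while polling macOS's /var/db/dhcpd_leases for the guest's MAC (b2:31:92:e0:55:28), which never shows up among the 18 existing entries. As a rough illustration of what each "Searching for ... in /var/db/dhcpd_leases" attempt is doing, here is a minimal Go sketch, not the driver's actual code: it assumes the brace-delimited name=value entry format that macOS's bootpd writes, and `ipForMAC` is a hypothetical helper name.

```go
// Illustrative sketch only: look up the IP bound to a MAC address in
// macOS's /var/db/dhcpd_leases, assuming bootpd's brace-delimited
// "key=value" entry format (hw_address is stored as "<type>,<mac>").
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipForMAC(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			hw = strings.TrimPrefix(line, "hw_address=")
		case line == "}": // end of one lease entry: compare and reset
			if i := strings.IndexByte(hw, ','); i >= 0 && hw[i+1:] == mac {
				return ip, nil
			}
			ip, hw = "", ""
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "b2:31:92:e0:55:28")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}
```

In the run above, this lookup keeps failing for roughly a minute of attempts, after which the driver gives up with the "IP address never found in dhcp leases file" error.
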
	I0917 02:43:30.616662    6614 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:43:30.616681    6614 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:43:30.626071    6614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54105
	I0917 02:43:30.626585    6614 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:43:30.626983    6614 main.go:141] libmachine: Using API Version  1
	I0917 02:43:30.626994    6614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:43:30.627259    6614 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:43:30.627692    6614 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:43:30.627744    6614 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:43:30.636234    6614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54107
	I0917 02:43:30.636562    6614 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:43:30.636877    6614 main.go:141] libmachine: Using API Version  1
	I0917 02:43:30.636885    6614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:43:30.637121    6614 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:43:30.637264    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .GetState
	I0917 02:43:30.637354    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:30.637430    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:30.638404    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .DriverName
	I0917 02:43:30.679825    6614 out.go:177] * Deleting "force-systemd-flag-972000" in hyperkit ...
	I0917 02:43:30.721637    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .Remove
	I0917 02:43:30.721759    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:30.721773    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:30.721829    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:30.722766    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:30.722836    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | waiting for graceful shutdown
	I0917 02:43:31.723307    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:31.723347    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:31.724256    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | waiting for graceful shutdown
	I0917 02:43:32.725079    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:32.725206    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:32.726879    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | waiting for graceful shutdown
	I0917 02:43:33.727410    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:33.727487    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:33.728235    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | waiting for graceful shutdown
	I0917 02:43:34.730175    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:34.730272    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:34.730874    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | waiting for graceful shutdown
	I0917 02:43:35.731363    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:35.731464    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6652
	I0917 02:43:35.732436    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | sending sigkill
	I0917 02:43:35.732446    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:43:35.742286    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:43:35 WARN : hyperkit: failed to read stdout: EOF
	I0917 02:43:35.742306    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:43:35 WARN : hyperkit: failed to read stderr: EOF
	W0917 02:43:35.758878    6614 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:31:92:e0:55:28
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:31:92:e0:55:28
	I0917 02:43:35.758892    6614 start.go:729] Will try again in 5 seconds ...
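
Per the "Will try again in 5 seconds" line, minikube tolerates one failed StartHost and retries with a fixed delay before giving up for good. A minimal sketch of that pattern, with `startHost` as a hypothetical stand-in for the real provisioning call:

```go
// Minimal sketch of the retry behavior visible in the log: a failed
// StartHost is logged as a warning, then retried after a fixed delay,
// for a bounded number of attempts.
package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

func startWithRetry(startHost func() error, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = startHost(); err == nil {
			return nil
		}
		if i < attempts-1 {
			log.Printf("! StartHost failed, but will try again: %v", err)
			time.Sleep(delay)
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := startWithRetry(func() error {
		calls++
		if calls == 1 {
			return errors.New("IP address never found in dhcp leases file")
		}
		return nil // second attempt succeeds in this toy run
	}, 2, 5*time.Second)
	if err != nil {
		log.Fatal(err)
	}
}
```
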
	I0917 02:43:40.760997    6614 start.go:360] acquireMachinesLock for force-systemd-flag-972000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:44:33.460787    6614 start.go:364] duration metric: took 52.699516411s to acquireMachinesLock for "force-systemd-flag-972000"
	I0917 02:44:33.460813    6614 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:44:33.460872    6614 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 02:44:33.481133    6614 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:44:33.481222    6614 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:44:33.481241    6614 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:44:33.489653    6614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54115
	I0917 02:44:33.489977    6614 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:44:33.490289    6614 main.go:141] libmachine: Using API Version  1
	I0917 02:44:33.490297    6614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:44:33.490489    6614 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:44:33.490603    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .GetMachineName
	I0917 02:44:33.490681    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .DriverName
	I0917 02:44:33.490812    6614 start.go:159] libmachine.API.Create for "force-systemd-flag-972000" (driver="hyperkit")
	I0917 02:44:33.490856    6614 client.go:168] LocalClient.Create starting
	I0917 02:44:33.490886    6614 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem
	I0917 02:44:33.490940    6614 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:33.490951    6614 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:33.491001    6614 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem
	I0917 02:44:33.491042    6614 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:33.491052    6614 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:33.491064    6614 main.go:141] libmachine: Running pre-create checks...
	I0917 02:44:33.491069    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .PreCreateCheck
	I0917 02:44:33.491145    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:33.491176    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .GetConfigRaw
	I0917 02:44:33.523486    6614 main.go:141] libmachine: Creating machine...
	I0917 02:44:33.523495    6614 main.go:141] libmachine: (force-systemd-flag-972000) Calling .Create
	I0917 02:44:33.523581    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:33.523728    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | I0917 02:44:33.523575    6696 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:44:33.523748    6614 main.go:141] libmachine: (force-systemd-flag-972000) Downloading /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1025/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0917 02:44:33.752249    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | I0917 02:44:33.752157    6696 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/id_rsa...
	I0917 02:44:33.884153    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | I0917 02:44:33.884048    6696 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/force-systemd-flag-972000.rawdisk...
	I0917 02:44:33.884176    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Writing magic tar header
	I0917 02:44:33.884199    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Writing SSH key tar header
	I0917 02:44:33.884759    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | I0917 02:44:33.884717    6696 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000 ...
	I0917 02:44:34.260640    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:34.260657    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/hyperkit.pid
	I0917 02:44:34.260669    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Using UUID 2d34dfa7-5e61-45e3-b2f6-9e908e1180f9
	I0917 02:44:34.285686    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Generated MAC f2:28:98:31:ab:92
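
The second attempt starts from a fresh UUID and a new MAC (f2:28:98:31:ab:92, replacing b2:31:92:e0:55:28), which is why the leases search begins over. The driver appears to derive the guest MAC from the VM's UUID via vmnet; purely as a generic illustration (not the driver's method), minting a random unicast, locally administered MAC in the log's unpadded notation could look like:

```go
// Hypothetical illustration only: generate a random unicast, locally
// administered MAC address, printed unpadded like the log's entries.
package main

import (
	"crypto/rand"
	"fmt"
)

func randomLocalMAC() (string, error) {
	b := make([]byte, 6)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	b[0] = (b[0] | 0x02) &^ 0x01 // set locally-administered bit, clear multicast bit
	return fmt.Sprintf("%x:%x:%x:%x:%x:%x", b[0], b[1], b[2], b[3], b[4], b[5]), nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac) // e.g. output in the f2:28:98:31:ab:92 style
}
```
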
	I0917 02:44:34.285703    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-972000
	I0917 02:44:34.285751    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2d34dfa7-5e61-45e3-b2f6-9e908e1180f9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:44:34.285786    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2d34dfa7-5e61-45e3-b2f6-9e908e1180f9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:44:34.285863    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2d34dfa7-5e61-45e3-b2f6-9e908e1180f9", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/force-systemd-flag-972000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-972000"}
	I0917 02:44:34.285928    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2d34dfa7-5e61-45e3-b2f6-9e908e1180f9 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/force-systemd-flag-972000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-972000"
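
The DEBUG lines above print the fully assembled hyperkit argv before launch. A hedged sketch of reproducing that invocation with os/exec follows; the file paths below are shortened placeholders for the long per-machine paths in the log, and the real driver pipes stdout/stderr into its own logger (as the next line notes) rather than to the terminal.

```go
// Sketch only: launching the hyperkit argv logged above via os/exec.
// Path arguments are placeholders, not the real per-machine paths.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", "hyperkit.pid", // placeholder pid-file path
		"-c", "2", "-m", "2048M",
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", "2d34dfa7-5e61-45e3-b2f6-9e908e1180f9",
		"-s", "2:0,virtio-blk,force-systemd-flag-972000.rawdisk", // placeholder
		"-s", "3,ahci-cd,boot2docker.iso",                        // placeholder
		"-s", "4,virtio-rnd",
	)
	cmd.Stdout = os.Stdout // the driver redirects these into its logger instead
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("hyperkit exited: %v", err)
	}
}
```
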
	I0917 02:44:34.285963    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:44:34.288922    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 DEBUG: hyperkit: Pid is 6697
	I0917 02:44:34.289921    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 0
	I0917 02:44:34.289933    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:34.290012    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:34.291120    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:34.291210    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:34.291234    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:34.291285    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:34.291328    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:34.291351    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:34.291364    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:34.291382    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:34.291397    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:34.291412    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:34.291427    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:34.291453    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:34.291467    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:34.291483    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:34.291496    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:34.291513    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:34.291535    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:34.291553    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:34.291568    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:34.291580    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:44:34.296285    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:44:34.304419    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-flag-972000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:44:34.305232    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:44:34.305254    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:44:34.305266    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:44:34.305279    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:44:34.685042    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:44:34.685058    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:44:34.799703    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:44:34.799730    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:44:34.799746    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:44:34.799761    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:44:34.800603    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:44:34.800614    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:44:36.292468    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 1
	I0917 02:44:36.292486    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:36.292576    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:36.293362    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:36.293403    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:36.293414    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:36.293422    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:36.293429    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:36.293446    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:36.293459    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:36.293467    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:36.293476    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:36.293485    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:36.293494    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:36.293504    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:36.293512    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:36.293528    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:36.293539    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:36.293547    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:36.293554    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:36.293568    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:36.293586    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:36.293601    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:44:38.294525    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 2
	I0917 02:44:38.294541    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:38.294604    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:38.295403    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:38.295463    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:38.295473    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:38.295489    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:38.295501    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:38.295509    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:38.295518    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:38.295525    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:38.295530    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:38.295537    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:38.295544    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:38.295552    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:38.295561    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:38.295568    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:38.295573    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:38.295586    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:38.295596    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:38.295607    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:38.295615    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:38.295626    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
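The block above then repeats: roughly every two seconds the driver re-reads /var/db/dhcpd_leases looking for a lease whose hardware address matches the new VM's MAC (f2:28:98:31:ab:92), and each attempt dumps the same 18 entries left behind by earlier minikube VMs, none of which match yet. A simplified sketch of that lookup follows, assuming the field layout commonly seen in the macOS bootpd lease file; the findIPByMAC helper and the exact field names are assumptions for illustration, not code quoted from the driver.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans the lease file for an hw_address entry matching mac and
// returns the ip_address recorded just before it in the same lease block.
// An empty result means the guest has not requested a DHCP lease yet, which
// is exactly the state the repeated "Attempt N" log lines are polling on.
// Assumed entry layout: ip_address=192.169.0.19 then hw_address=1,<mac>.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// e.g. hw_address=1,f2:28:98:31:ab:92 -- drop the "1," type prefix.
			if strings.TrimPrefix(line, "hw_address=1,") == mac {
				return ip, nil
			}
		}
	}
	return "", sc.Err()
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "f2:28:98:31:ab:92")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("lease IP:", ip) // empty while the guest is still booting
}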
	I0917 02:44:40.206549    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:44:40.206707    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:44:40.206716    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:44:40.226271    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | 2024/09/17 02:44:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:44:40.296548    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 3
	I0917 02:44:40.296601    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:40.296704    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:40.297785    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:40.297871    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:40.297888    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:40.297912    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:40.297930    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:40.297941    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:40.297951    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:40.297961    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:40.297988    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:40.298003    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:40.298016    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:40.298025    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:40.298037    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:40.298067    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:40.298078    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:40.298090    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:40.298101    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:40.298133    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:40.298148    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:40.298160    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:44:42.298659    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 4
	I0917 02:44:42.298677    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:42.298749    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:42.299547    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:42.299596    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:42.299613    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:42.299639    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:42.299649    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:42.299656    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:42.299666    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:42.299686    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:42.299698    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:42.299714    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:42.299723    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:42.299746    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:42.299761    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:42.299770    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:42.299776    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:42.299790    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:42.299802    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:42.299810    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:42.299817    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:42.299824    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:44:44.301929    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 5
	I0917 02:44:44.301942    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:44.301993    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:44.302808    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:44.302847    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:44.302858    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:44.302871    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:44.302909    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:44.302921    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:44.302932    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:44.302940    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:44.302947    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:44.302954    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:44.302961    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:44.302970    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:44.302978    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:44.302985    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:44.302993    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:44.302999    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:44.303005    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:44.303023    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:44.303031    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:44.303040    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:44:46.305042    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 6
	I0917 02:44:46.305054    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:46.305130    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:46.305908    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:46.305963    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:46.305973    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:46.305980    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:46.305986    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:46.305992    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:46.306010    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:46.306018    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:46.306025    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:46.306033    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:46.306039    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:46.306045    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:46.306052    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:46.306060    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:46.306072    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:46.306083    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:46.306092    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:46.306098    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:46.306104    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:46.306110    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:44:48.308113    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 7
	I0917 02:44:48.308125    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:48.308166    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:48.308937    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:48.308996    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:48.309007    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:48.309016    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:48.309023    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:48.309030    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:48.309037    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:48.309059    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:48.309067    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:48.309074    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:48.309079    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:48.309095    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:48.309106    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:48.309116    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:48.309124    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:48.309131    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:48.309139    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:48.309145    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:48.309151    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:48.309158    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:44:50.309367    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 8
	I0917 02:44:50.309379    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:50.309448    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:50.310231    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:50.310279    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:50.310288    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:50.310299    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:50.310307    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:50.310314    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:50.310321    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:50.310328    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:50.310340    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:50.310348    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:50.310355    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:50.310362    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:50.310368    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:50.310389    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:50.310405    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:50.310413    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:50.310420    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:50.310428    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:50.310434    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:50.310445    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:44:52.312453    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 9
	I0917 02:44:52.312467    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:52.312537    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:52.313329    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:52.313387    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:52.313396    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:52.313424    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:52.313435    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:52.313455    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:52.313478    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:52.313491    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:52.313500    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:52.313507    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:52.313519    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:52.313528    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:52.313546    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:52.313556    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:52.313563    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:52.313570    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:52.313576    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:52.313582    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:52.313592    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:52.313606    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:44:54.315634    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 10
	I0917 02:44:54.315649    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:54.315697    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:54.316676    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:54.316707    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:54.316715    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:54.316723    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:54.316732    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:54.316747    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:54.316754    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:54.316761    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:54.316768    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:54.316784    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:54.316799    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:54.316808    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:54.316816    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:54.316823    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:54.316830    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:54.316842    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:54.316851    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:54.316864    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:54.316873    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:54.316882    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:44:56.317965    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 11
	I0917 02:44:56.317977    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:56.318078    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:56.319122    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:56.319166    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:56.319178    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:56.319193    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:56.319203    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:56.319210    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:56.319219    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:56.319226    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:56.319238    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:56.319248    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:56.319258    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:56.319269    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:56.319276    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:56.319284    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:56.319293    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:56.319301    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:56.319307    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:56.319316    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:56.319323    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:56.319333    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:44:58.321330    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 12
	I0917 02:44:58.321342    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:44:58.321397    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:44:58.322177    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:44:58.322237    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:44:58.322249    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:44:58.322258    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:44:58.322265    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:44:58.322271    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:44:58.322278    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:44:58.322284    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:44:58.322290    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:44:58.322301    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:44:58.322312    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:44:58.322319    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:44:58.322325    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:44:58.322340    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:44:58.322352    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:44:58.322362    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:44:58.322375    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:44:58.322382    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:44:58.322388    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:44:58.322395    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	[Attempts 13-25, 02:45:00 through 02:45:24: identical results every 2 seconds; 18 entries found in /var/db/dhcpd_leases (192.169.0.2 through 192.169.0.19), none matching f2:28:98:31:ab:92.]
	I0917 02:45:26.354821    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 26
	I0917 02:45:26.354833    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:26.354905    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:45:26.355752    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:45:26.355785    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:45:26.355793    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:45:26.355803    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:45:26.355810    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:45:26.355816    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:45:26.355822    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:45:26.355832    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:45:26.355839    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:45:26.355845    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:45:26.355853    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:45:26.355861    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:45:26.355868    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:45:26.355875    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:45:26.355882    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:45:26.355897    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:45:26.355913    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:45:26.355925    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:45:26.355933    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:45:26.355941    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:45:28.358020    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 27
	I0917 02:45:28.358033    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:28.358104    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:45:28.358885    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:45:28.358936    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:45:28.358949    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:45:28.358956    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:45:28.358961    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:45:28.358972    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:45:28.358980    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:45:28.358988    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:45:28.358997    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:45:28.359003    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:45:28.359009    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:45:28.359015    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:45:28.359023    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:45:28.359040    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:45:28.359048    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:45:28.359058    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:45:28.359065    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:45:28.359072    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:45:28.359080    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:45:28.359097    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:45:30.361134    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 28
	I0917 02:45:30.361537    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:30.361727    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:45:30.362031    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:45:30.362100    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:45:30.362120    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:45:30.362195    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:45:30.362229    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:45:30.362246    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:45:30.362257    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:45:30.362268    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:45:30.362279    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:45:30.362296    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:45:30.362311    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:45:30.362320    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:45:30.362328    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:45:30.362336    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:45:30.362361    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:45:30.362376    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:45:30.362384    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:45:30.362392    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:45:30.362406    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:45:30.362413    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:45:32.364309    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Attempt 29
	I0917 02:45:32.364327    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:45:32.364340    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | hyperkit pid from json: 6697
	I0917 02:45:32.365130    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Searching for f2:28:98:31:ab:92 in /var/db/dhcpd_leases ...
	I0917 02:45:32.365157    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:45:32.365172    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:45:32.365182    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:45:32.365209    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:45:32.365221    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:45:32.365230    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:45:32.365237    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:45:32.365243    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:45:32.365269    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:45:32.365286    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:45:32.365304    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:45:32.365317    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:45:32.365335    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:45:32.365343    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:45:32.365353    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:45:32.365361    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:45:32.365376    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:45:32.365388    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:45:32.365397    6614 main.go:141] libmachine: (force-systemd-flag-972000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:45:34.366604    6614 client.go:171] duration metric: took 1m0.875461644s to LocalClient.Create
	I0917 02:45:36.367892    6614 start.go:128] duration metric: took 1m2.906725768s to createHost
	I0917 02:45:36.367904    6614 start.go:83] releasing machines lock for "force-systemd-flag-972000", held for 1m2.9068185s
	W0917 02:45:36.367998    6614 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-972000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:28:98:31:ab:92
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-972000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:28:98:31:ab:92
	I0917 02:45:36.430153    6614 out.go:201] 
	W0917 02:45:36.451182    6614 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:28:98:31:ab:92
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:28:98:31:ab:92
	W0917 02:45:36.451195    6614 out.go:270] * 
	* 
	W0917 02:45:36.451860    6614 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:45:36.513113    6614 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-972000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-972000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-972000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (175.363358ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-972000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-972000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-17 02:45:36.800031 -0700 PDT m=+4082.919510206
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-972000 -n force-systemd-flag-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-972000 -n force-systemd-flag-972000: exit status 7 (78.351595ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 02:45:36.876476    6719 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 02:45:36.876497    6719 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-972000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-972000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-972000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-972000: (5.263493029s)
--- FAIL: TestForceSystemdFlag (252.15s)

                                                
                                    
x
+
TestForceSystemdEnv (233.72s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-601000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0917 02:38:59.180406    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-601000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m48.124070705s)

                                                
                                                
-- stdout --
	* [force-systemd-env-601000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-601000" primary control-plane node in "force-systemd-env-601000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-601000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:38:39.590272    6541 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:38:39.590441    6541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:38:39.590446    6541 out.go:358] Setting ErrFile to fd 2...
	I0917 02:38:39.590450    6541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:38:39.590613    6541 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:38:39.592118    6541 out.go:352] Setting JSON to false
	I0917 02:38:39.614590    6541 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4089,"bootTime":1726561830,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:38:39.614682    6541 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:38:39.636478    6541 out.go:177] * [force-systemd-env-601000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:38:39.676984    6541 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:38:39.677023    6541 notify.go:220] Checking for updates...
	I0917 02:38:39.718750    6541 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:38:39.760913    6541 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:38:39.781720    6541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:38:39.802020    6541 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:38:39.822871    6541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0917 02:38:39.844076    6541 config.go:182] Loaded profile config "offline-docker-246000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:38:39.844150    6541 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:38:39.873010    6541 out.go:177] * Using the hyperkit driver based on user configuration
	I0917 02:38:39.914720    6541 start.go:297] selected driver: hyperkit
	I0917 02:38:39.914731    6541 start.go:901] validating driver "hyperkit" against <nil>
	I0917 02:38:39.914740    6541 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:38:39.917535    6541 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:38:39.917659    6541 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:38:39.925862    6541 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:38:39.929706    6541 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:38:39.929740    6541 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:38:39.929776    6541 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:38:39.930024    6541 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 02:38:39.930054    6541 cni.go:84] Creating CNI manager for ""
	I0917 02:38:39.930095    6541 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:38:39.930106    6541 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:38:39.930172    6541 start.go:340] cluster config:
	{Name:force-systemd-env-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:38:39.930268    6541 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:38:39.971890    6541 out.go:177] * Starting "force-systemd-env-601000" primary control-plane node in "force-systemd-env-601000" cluster
	I0917 02:38:39.992723    6541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:38:39.992751    6541 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:38:39.992760    6541 cache.go:56] Caching tarball of preloaded images
	I0917 02:38:39.992853    6541 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:38:39.992861    6541 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:38:39.992927    6541 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/force-systemd-env-601000/config.json ...
	I0917 02:38:39.992943    6541 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/force-systemd-env-601000/config.json: {Name:mkb52658c89226dfe7ebcc324802fac0b8dfd4c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:38:39.993269    6541 start.go:360] acquireMachinesLock for force-systemd-env-601000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:39:18.683069    6541 start.go:364] duration metric: took 38.689605168s to acquireMachinesLock for "force-systemd-env-601000"
	I0917 02:39:18.683106    6541 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:39:18.683163    6541 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 02:39:18.704340    6541 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:39:18.704504    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:39:18.704538    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:39:18.713048    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54068
	I0917 02:39:18.713388    6541 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:39:18.713900    6541 main.go:141] libmachine: Using API Version  1
	I0917 02:39:18.713935    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:39:18.714232    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:39:18.714393    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .GetMachineName
	I0917 02:39:18.714490    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .DriverName
	I0917 02:39:18.714604    6541 start.go:159] libmachine.API.Create for "force-systemd-env-601000" (driver="hyperkit")
	I0917 02:39:18.714628    6541 client.go:168] LocalClient.Create starting
	I0917 02:39:18.714661    6541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem
	I0917 02:39:18.714709    6541 main.go:141] libmachine: Decoding PEM data...
	I0917 02:39:18.714724    6541 main.go:141] libmachine: Parsing certificate...
	I0917 02:39:18.714784    6541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem
	I0917 02:39:18.714821    6541 main.go:141] libmachine: Decoding PEM data...
	I0917 02:39:18.714834    6541 main.go:141] libmachine: Parsing certificate...
	I0917 02:39:18.714846    6541 main.go:141] libmachine: Running pre-create checks...
	I0917 02:39:18.714853    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .PreCreateCheck
	I0917 02:39:18.714930    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:18.715075    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .GetConfigRaw
	I0917 02:39:18.747461    6541 main.go:141] libmachine: Creating machine...
	I0917 02:39:18.747470    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .Create
	I0917 02:39:18.747573    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:18.747748    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | I0917 02:39:18.747562    6565 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:39:18.747774    6541 main.go:141] libmachine: (force-systemd-env-601000) Downloading /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1025/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0917 02:39:18.955301    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | I0917 02:39:18.955204    6565 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/id_rsa...
	I0917 02:39:19.044390    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | I0917 02:39:19.044317    6565 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/force-systemd-env-601000.rawdisk...
	I0917 02:39:19.044402    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Writing magic tar header
	I0917 02:39:19.044419    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Writing SSH key tar header
	I0917 02:39:19.044993    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | I0917 02:39:19.044941    6565 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000 ...
	I0917 02:39:19.423330    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:19.423344    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/hyperkit.pid
	I0917 02:39:19.423391    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Using UUID caa159c0-06ed-46db-ae90-70871ead0790
	I0917 02:39:19.448318    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Generated MAC fa:11:91:46:2d:fd
	I0917 02:39:19.448334    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-601000
	I0917 02:39:19.448366    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"caa159c0-06ed-46db-ae90-70871ead0790", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]str
ing(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:39:19.448392    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"caa159c0-06ed-46db-ae90-70871ead0790", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]str
ing(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:39:19.448459    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "caa159c0-06ed-46db-ae90-70871ead0790", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/force-systemd-env-601000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-sys
temd-env-601000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-601000"}
	I0917 02:39:19.448506    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U caa159c0-06ed-46db-ae90-70871ead0790 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/force-systemd-env-601000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/bzimage,/Users/jenkins/minikube-integration/19
648-1025/.minikube/machines/force-systemd-env-601000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-601000"
	I0917 02:39:19.448518    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:39:19.451609    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 DEBUG: hyperkit: Pid is 6567
	I0917 02:39:19.452038    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 0
	I0917 02:39:19.452060    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:19.452118    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:19.453018    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:19.453085    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:19.453095    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:19.453117    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:19.453137    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:19.453172    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:19.453208    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:19.453220    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:19.453240    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:19.453256    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:19.453275    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:19.453292    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:19.453315    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:19.453330    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:19.453341    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:19.453352    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:19.453360    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:19.453368    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:19.453383    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:19.453392    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:19.459595    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:39:19.468126    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:39:19.469056    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:39:19.469079    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:39:19.469100    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:39:19.469115    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:39:19.846698    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:39:19.846713    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:39:19.961246    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:39:19.961271    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:39:19.961284    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:39:19.961295    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:39:19.962164    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:39:19.962175    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:39:21.453943    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 1
	I0917 02:39:21.453958    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:21.454047    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:21.454922    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:21.454978    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:21.454985    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:21.454995    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:21.455001    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:21.455022    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:21.455037    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:21.455048    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:21.455061    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:21.455075    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:21.455084    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:21.455091    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:21.455096    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:21.455106    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:21.455114    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:21.455129    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:21.455137    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:21.455143    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:21.455151    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:21.455167    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:23.456958    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 2
	I0917 02:39:23.456971    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:23.457039    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:23.457840    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:23.457896    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:23.457907    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:23.457914    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:23.457919    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:23.457925    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:23.457932    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:23.457937    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:23.457943    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:23.457950    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:23.457958    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:23.457963    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:23.457974    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:23.457984    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:23.457991    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:23.457996    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:23.458011    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:23.458023    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:23.458035    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:23.458050    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:25.341386    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:39:25.341496    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:39:25.341505    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:39:25.361705    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:39:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:39:25.458495    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 3
	I0917 02:39:25.458521    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:25.458659    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:25.459814    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:25.459871    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:25.459884    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:25.459907    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:25.459921    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:25.459939    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:25.459958    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:25.459969    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:25.459981    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:25.459994    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:25.460020    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:25.460052    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:25.460067    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:25.460091    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:25.460108    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:25.460119    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:25.460128    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:25.460151    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:25.460171    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:25.460184    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:27.460092    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 4
	I0917 02:39:27.460109    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:27.460181    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:27.460976    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:27.461028    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:27.461039    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:27.461051    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:27.461063    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:27.461071    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:27.461083    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:27.461089    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:27.461095    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:27.461102    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:27.461108    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:27.461114    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:27.461119    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:27.461127    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:27.461134    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:27.461141    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:27.461149    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:27.461167    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:27.461179    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:27.461189    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:29.461939    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 5
	I0917 02:39:29.461951    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:29.462028    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:29.462826    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:29.462864    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:29.462872    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:29.462881    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:29.462889    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:29.462896    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:29.462901    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:29.462908    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:29.462921    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:29.462932    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:29.462960    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:29.462972    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:29.462981    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:29.462988    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:29.462995    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:29.463002    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:29.463012    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:29.463019    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:29.463026    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:29.463031    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:31.463967    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 6
	I0917 02:39:31.463982    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:31.464018    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:31.464795    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:31.464855    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:31.464865    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:31.464872    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:31.464878    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:31.464889    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:31.464901    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:31.464942    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:31.464954    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:31.464963    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:31.464972    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:31.464986    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:31.464998    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:31.465005    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:31.465013    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:31.465021    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:31.465029    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:31.465036    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:31.465051    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:31.465061    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:33.465711    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 7
	I0917 02:39:33.465725    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:33.465815    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:33.466634    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:33.466653    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:33.466660    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:33.466674    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:33.466680    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:33.466692    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:33.466720    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:33.466727    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:33.466734    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:33.466741    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:33.466748    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:33.466756    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:33.466763    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:33.466770    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:33.466777    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:33.466783    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:33.466790    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:33.466798    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:33.466805    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:33.466811    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:35.467151    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 8
	I0917 02:39:35.467163    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:35.467219    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:35.467956    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:35.468011    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:35.468023    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:35.468033    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:35.468039    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:35.468055    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:35.468061    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:35.468067    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:35.468073    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:35.468081    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:35.468095    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:35.468111    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:35.468117    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:35.468123    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:35.468131    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:35.468147    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:35.468159    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:35.468173    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:35.468182    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:35.468191    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:37.470238    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 9
	I0917 02:39:37.470249    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:37.470305    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:37.471079    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:37.471129    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:37.471139    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:37.471148    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:37.471153    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:37.471159    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:37.471167    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:37.471179    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:37.471192    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:37.471265    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:37.471273    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:37.471279    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:37.471284    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:37.471290    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:37.471295    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:37.471317    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:37.471329    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:37.471337    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:37.471343    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:37.471360    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:39.471985    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 10
	I0917 02:39:39.472000    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:39.472066    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:39.472852    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:39.472919    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:39.472933    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:39.472945    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:39.472953    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:39.472958    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:39.472967    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:39.472976    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:39.472994    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:39.473007    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:39.473020    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:39.473028    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:39.473034    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:39.473042    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:39.473055    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:39.473067    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:39.473075    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:39.473083    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:39.473089    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:39.473097    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:41.474303    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 11
	I0917 02:39:41.474319    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:41.474385    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:41.475142    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:41.475200    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:41.475209    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:41.475217    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:41.475223    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:41.475244    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:41.475259    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:41.475273    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:41.475283    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:41.475290    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:41.475298    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:41.475305    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:41.475311    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:41.475318    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:41.475325    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:41.475332    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:41.475340    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:41.475346    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:41.475354    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:41.475370    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:43.475725    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 12
	I0917 02:39:43.475738    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:43.475798    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:43.476588    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:43.476641    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:43.476650    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:43.476673    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:43.476684    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:43.476700    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:43.476709    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:43.476716    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:43.476722    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:43.476730    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:43.476758    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:43.476770    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:43.476777    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:43.476786    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:43.476792    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:43.476821    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:43.476828    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:43.476837    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:43.476847    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:43.476859    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:39:45.477550    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 13
	I0917 02:39:45.477565    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:39:45.477609    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:39:45.478404    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:39:45.478487    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:39:45.478508    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:39:45.478535    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:39:45.478549    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:39:45.478557    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:39:45.478565    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:39:45.478581    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:39:45.478593    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:39:45.478610    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:39:45.478618    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:39:45.478625    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:39:45.478632    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:39:45.478639    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:39:45.478644    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:39:45.478650    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:39:45.478658    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:39:45.478664    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:39:45.478672    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:39:45.478682    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	[log condensed: attempts 14 through 26 (02:39:47 to 02:40:11) repeat the identical scan at ~2-second intervals; each pass re-reads /var/db/dhcpd_leases, finds the same 18 minikube entries (192.169.0.2 through 192.169.0.19), and never matches fa:11:91:46:2d:fd]
	I0917 02:40:13.511814    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 27
	I0917 02:40:13.511828    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:13.511841    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:40:13.512635    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:40:13.512689    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:13.512701    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:13.512710    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:13.512719    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:13.512748    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:13.512760    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:13.512767    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:13.512775    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:13.512797    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:13.512808    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:13.512824    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:13.512837    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:13.512845    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:13.512852    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:13.512863    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:13.512873    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:13.512887    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:13.512895    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:13.512902    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:15.514441    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 28
	I0917 02:40:15.514455    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:15.514509    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:40:15.515323    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:40:15.515362    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:15.515371    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:15.515380    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:15.515385    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:15.515392    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:15.515400    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:15.515407    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:15.515413    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:15.515419    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:15.515425    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:15.515430    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:15.515438    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:15.515444    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:15.515453    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:15.515458    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:15.515474    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:15.515485    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:15.515496    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:15.515504    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:17.517520    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 29
	I0917 02:40:17.517534    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:17.517606    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:40:17.518361    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for fa:11:91:46:2d:fd in /var/db/dhcpd_leases ...
	I0917 02:40:17.518422    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:40:17.518430    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:40:17.518438    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:40:17.518444    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:40:17.518451    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:40:17.518459    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:40:17.518465    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:40:17.518471    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:40:17.518483    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:40:17.518496    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:40:17.518504    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:40:17.518512    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:40:17.518527    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:40:17.518538    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:40:17.518549    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:40:17.518562    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:40:17.518584    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:40:17.518597    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:40:17.518610    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:40:19.520604    6541 client.go:171] duration metric: took 1m0.805688542s to LocalClient.Create
	I0917 02:40:21.521432    6541 start.go:128] duration metric: took 1m2.837958159s to createHost
	I0917 02:40:21.521446    6541 start.go:83] releasing machines lock for "force-systemd-env-601000", held for 1m2.838079378s
	W0917 02:40:21.521469    6541 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fa:11:91:46:2d:fd
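	
	The block above is the hyperkit driver polling macOS's /var/db/dhcpd_leases for the freshly generated guest MAC (fa:11:91:46:2d:fd): one scan roughly every 2 seconds, the same 18 stale minikube leases found on each pass, and never the new one, until LocalClient.Create gives up after about a minute. A minimal Go sketch of that kind of lookup follows. The lease-file field names (ip_address=, hw_address=) and the findIPForMAC helper are illustrative assumptions, not the driver's actual implementation.
	
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)
	
	// findIPForMAC scans the lease file once, remembering the last ip_address
	// seen so it can be paired with the hw_address line that follows it.
	// Assumes entries of the form "ip_address=192.169.0.19" and
	// "hw_address=1,6:e9:71:cb:95:85" (hypothetical format for this sketch).
	func findIPForMAC(path, mac string) (string, bool) {
		f, err := os.Open(path)
		if err != nil {
			return "", false // treat an unreadable file like "no lease yet"
		}
		defer f.Close()
	
		var lastIP string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				lastIP = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
				return lastIP, true
			}
		}
		return "", false
	}
	
	func main() {
		mac := "fa:11:91:46:2d:fd" // the MAC this run kept searching for
		for attempt := 0; attempt < 30; attempt++ {
			if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
				fmt.Printf("found %s after attempt %d\n", ip, attempt)
				return
			}
			time.Sleep(2 * time.Second) // the timestamps above show a ~2s cadence
		}
		fmt.Println("could not find an IP address for " + mac)
	}
	
	With 30 attempts at a 2-second cadence this bounds the wait at about a minute, consistent with the "took 1m0.805688542s to LocalClient.Create" line below.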
	I0917 02:40:21.521802    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:40:21.521829    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:40:21.530415    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54070
	I0917 02:40:21.530769    6541 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:40:21.531109    6541 main.go:141] libmachine: Using API Version  1
	I0917 02:40:21.531123    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:40:21.531344    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:40:21.531703    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:40:21.531727    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:40:21.540008    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54072
	I0917 02:40:21.540350    6541 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:40:21.540704    6541 main.go:141] libmachine: Using API Version  1
	I0917 02:40:21.540715    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:40:21.540924    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:40:21.541048    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .GetState
	I0917 02:40:21.541128    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:21.541197    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:40:21.542164    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .DriverName
	I0917 02:40:21.584624    6541 out.go:177] * Deleting "force-systemd-env-601000" in hyperkit ...
	I0917 02:40:21.605806    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .Remove
	I0917 02:40:21.605926    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:21.605936    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:21.606014    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:40:21.606983    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:21.607032    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | waiting for graceful shutdown
	I0917 02:40:22.608338    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:22.608422    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:40:22.609377    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | waiting for graceful shutdown
	I0917 02:40:23.609620    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:23.609696    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:40:23.611468    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | waiting for graceful shutdown
	I0917 02:40:24.612297    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:24.612370    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:40:24.613205    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | waiting for graceful shutdown
	I0917 02:40:25.614966    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:25.615045    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:40:25.615604    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | waiting for graceful shutdown
	I0917 02:40:26.616346    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:26.616443    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6567
	I0917 02:40:26.617540    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | sending sigkill
	I0917 02:40:26.617552    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:40:26.626720    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:40:26 WARN : hyperkit: failed to read stderr: EOF
	I0917 02:40:26.626735    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:40:26 WARN : hyperkit: failed to read stdout: EOF
	W0917 02:40:26.642991    6541 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fa:11:91:46:2d:fd
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fa:11:91:46:2d:fd
	I0917 02:40:26.643010    6541 start.go:729] Will try again in 5 seconds ...
	I0917 02:40:31.645116    6541 start.go:360] acquireMachinesLock for force-systemd-env-601000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:41:24.377389    6541 start.go:364] duration metric: took 52.73199407s to acquireMachinesLock for "force-systemd-env-601000"
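	
	The acquireMachinesLock lines record a named lock spec (Delay:500ms Timeout:13m0s) being polled for almost 53 seconds while another test held the machines lock. A rough sketch of that poll-until-timeout pattern, using a hypothetical lock file rather than the system mutex library minikube actually delegates to:
	
	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	// tryLock attempts a non-blocking acquire by creating the lock file
	// exclusively; failure means some other process holds the lock.
	func tryLock(path string) bool {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err != nil {
			return false
		}
		f.Close()
		return true
	}
	
	// acquireWithTimeout re-polls every delay until the deadline, mirroring
	// the Delay:500ms / Timeout:13m0s fields in the log above.
	func acquireWithTimeout(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if tryLock(path) {
				return nil
			}
			time.Sleep(delay)
		}
		return errors.New("timed out acquiring machines lock")
	}
	
	func main() {
		start := time.Now()
		// The lock path is a placeholder for this sketch.
		if err := acquireWithTimeout("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("acquired lock after %s\n", time.Since(start))
	}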
	I0917 02:41:24.377427    6541 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:41:24.377480    6541 start.go:125] createHost starting for "" (driver="hyperkit")
	I0917 02:41:24.399261    6541 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:41:24.399347    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:41:24.399370    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:41:24.408052    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54076
	I0917 02:41:24.408432    6541 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:41:24.408810    6541 main.go:141] libmachine: Using API Version  1
	I0917 02:41:24.408828    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:41:24.409035    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:41:24.409167    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .GetMachineName
	I0917 02:41:24.409266    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .DriverName
	I0917 02:41:24.409377    6541 start.go:159] libmachine.API.Create for "force-systemd-env-601000" (driver="hyperkit")
	I0917 02:41:24.409393    6541 client.go:168] LocalClient.Create starting
	I0917 02:41:24.409416    6541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem
	I0917 02:41:24.409468    6541 main.go:141] libmachine: Decoding PEM data...
	I0917 02:41:24.409488    6541 main.go:141] libmachine: Parsing certificate...
	I0917 02:41:24.409529    6541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem
	I0917 02:41:24.409569    6541 main.go:141] libmachine: Decoding PEM data...
	I0917 02:41:24.409576    6541 main.go:141] libmachine: Parsing certificate...
	I0917 02:41:24.409590    6541 main.go:141] libmachine: Running pre-create checks...
	I0917 02:41:24.409596    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .PreCreateCheck
	I0917 02:41:24.409671    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:41:24.409701    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .GetConfigRaw
	I0917 02:41:24.419903    6541 main.go:141] libmachine: Creating machine...
	I0917 02:41:24.419911    6541 main.go:141] libmachine: (force-systemd-env-601000) Calling .Create
	I0917 02:41:24.420019    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:41:24.420144    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | I0917 02:41:24.420014    6603 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:41:24.420193    6541 main.go:141] libmachine: (force-systemd-env-601000) Downloading /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1025/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0917 02:41:24.786675    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | I0917 02:41:24.786614    6603 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/id_rsa...
	I0917 02:41:24.997200    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | I0917 02:41:24.997103    6603 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/force-systemd-env-601000.rawdisk...
	I0917 02:41:24.997222    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Writing magic tar header
	I0917 02:41:24.997233    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Writing SSH key tar header
	I0917 02:41:24.997822    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | I0917 02:41:24.997777    6603 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000 ...
	I0917 02:41:25.372481    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:41:25.372503    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/hyperkit.pid
	I0917 02:41:25.372544    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Using UUID 117946d7-664a-4762-8bb7-1a4d224c4b17
	I0917 02:41:25.398415    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Generated MAC 36:bd:ee:4f:b1:7b
	I0917 02:41:25.398434    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-601000
	I0917 02:41:25.398463    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"117946d7-664a-4762-8bb7-1a4d224c4b17", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:41:25.398493    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"117946d7-664a-4762-8bb7-1a4d224c4b17", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:41:25.398581    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "117946d7-664a-4762-8bb7-1a4d224c4b17", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/force-systemd-env-601000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-601000"}
	I0917 02:41:25.398634    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 117946d7-664a-4762-8bb7-1a4d224c4b17 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/force-systemd-env-601000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-601000"
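	
	The Arguments/CmdLine debug lines above record the exact hyperkit invocation for the retry. For illustration only, the same argv could be assembled and launched with os/exec roughly as follows (paths taken from this run, kernel cmdline abbreviated; this is a sketch, not the driver's code):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		machineDir := "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000"
		args := []string{
			"-A", "-u",
			"-F", machineDir + "/hyperkit.pid", // pid file the driver later reads back
			"-c", "2", // CPUs
			"-m", "2048M", // memory
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net", // the NIC whose MAC must appear in dhcpd_leases
			"-U", "117946d7-664a-4762-8bb7-1a4d224c4b17", // VM UUID, from which the MAC is derived
			"-s", "2:0,virtio-blk," + machineDir + "/force-systemd-env-601000.rawdisk",
			"-s", "3,ahci-cd," + machineDir + "/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty=" + machineDir + "/tty,log=" + machineDir + "/console-ring",
			// Kernel cmdline abbreviated here; the full string is in the log above.
			"-f", "kexec," + machineDir + "/bzimage," + machineDir + "/initrd,earlyprintk=serial loglevel=3 console=ttyS0",
		}
		cmd := exec.Command("/usr/local/bin/hyperkit", args...)
		if err := cmd.Start(); err != nil {
			fmt.Println("hyperkit start failed:", err)
			return
		}
		fmt.Println("hyperkit pid:", cmd.Process.Pid)
	}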
	I0917 02:41:25.398648    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:41:25.401691    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 DEBUG: hyperkit: Pid is 6613
	I0917 02:41:25.402111    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 0
	I0917 02:41:25.402127    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:41:25.402253    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:41:25.403121    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:41:25.403198    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:41:25.403217    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:41:25.403244    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:41:25.403280    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:41:25.403297    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:41:25.403310    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:41:25.403347    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:41:25.403366    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:41:25.403375    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:41:25.403384    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:41:25.403391    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:41:25.403404    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:41:25.403427    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:41:25.403440    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:41:25.403449    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:41:25.403455    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:41:25.403473    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:41:25.403481    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:41:25.403489    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:41:25.409772    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:41:25.417801    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/force-systemd-env-601000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:41:25.418736    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:41:25.418750    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:41:25.418759    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:41:25.418765    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:41:25.796498    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:41:25.796514    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:41:25.911136    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:41:25.911154    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:41:25.911167    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:41:25.911198    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:41:25.912032    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:41:25.912058    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:41:27.403595    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 1
	I0917 02:41:27.403611    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:41:27.403747    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:41:27.404547    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:41:27.404606    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:41:27.404616    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:41:27.404625    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:41:27.404630    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:41:27.404650    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:41:27.404659    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:41:27.404667    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:41:27.404673    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:41:27.404679    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:41:27.404690    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:41:27.404697    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:41:27.404703    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:41:27.404710    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:41:27.404716    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:41:27.404725    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:41:27.404732    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:41:27.404739    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:41:27.404756    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:41:27.404764    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:41:29.405876    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 2
	I0917 02:41:29.405896    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:41:29.405974    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:41:29.406839    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:41:29.406896    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:41:29.406908    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:41:29.406919    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:41:29.406929    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:41:29.406948    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:41:29.406981    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:41:29.406992    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:41:29.407000    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:41:29.407015    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:41:29.407025    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:41:29.407034    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:41:29.407042    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:41:29.407049    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:41:29.407057    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:41:29.407071    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:41:29.407084    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:41:29.407096    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:41:29.407104    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:41:29.407117    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:41:31.329863    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:41:31.329964    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:41:31.329975    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:41:31.349578    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | 2024/09/17 02:41:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:41:31.409289    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 3
	I0917 02:41:31.409311    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:41:31.409512    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:41:31.410968    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:41:31.411084    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:41:31.411099    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:41:31.411123    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:41:31.411134    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:41:31.411146    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:41:31.411163    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:41:31.411177    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:41:31.411187    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:41:31.411198    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:41:31.411212    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:41:31.411222    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:41:31.411252    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:41:31.411271    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:41:31.411285    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:41:31.411304    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:41:31.411327    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:41:31.411343    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:41:31.411354    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:41:31.411362    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:41:33.412861    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 4
	I0917 02:41:33.412876    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:41:33.412963    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:41:33.413757    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:41:33.413807    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:41:33.413823    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:41:33.413839    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:41:33.413852    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:41:33.413868    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:41:33.413877    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:41:33.413892    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:41:33.413903    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:41:33.413910    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:41:33.413918    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:41:33.413932    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:41:33.413944    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:41:33.413952    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:41:33.413960    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:41:33.413966    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:41:33.413971    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:41:33.413977    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:41:33.413983    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:41:33.413991    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
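
The retry loop traced above is the hyperkit driver polling the host's DHCP lease table for the guest's MAC address (36:bd:ee:4f:b1:7b); the VM never obtains a lease, so each scan keeps finding the same 18 stale entries. Purely to illustrate that mechanism — this is not the driver's actual code — here is a minimal, self-contained Go sketch that parses the standard macOS /var/db/dhcpd_leases format (brace-delimited key=value blocks) and polls it on the same two-second cadence; dhcpEntry, parseLeases, and waitForIP are hypothetical names introduced for this sketch.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)

	// dhcpEntry mirrors the fields the log prints for each lease.
	type dhcpEntry struct {
		Name, IPAddress, HWAddress, Lease string
	}

	// parseLeases reads a macOS dhcpd_leases file, which stores one
	// brace-delimited block of key=value lines per lease.
	func parseLeases(path string) ([]dhcpEntry, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
		var entries []dhcpEntry
		var cur dhcpEntry
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{":
				cur = dhcpEntry{}
			case line == "}":
				entries = append(entries, cur)
			default:
				if k, v, ok := strings.Cut(line, "="); ok {
					switch k {
					case "name":
						cur.Name = v
					case "ip_address":
						cur.IPAddress = v
					case "hw_address":
						// stored as "1,6:e9:71:..."; drop the "1," type prefix
						if _, mac, found := strings.Cut(v, ","); found {
							cur.HWAddress = mac
						}
					case "lease":
						cur.Lease = v
					}
				}
			}
		}
		return entries, sc.Err()
	}

	// waitForIP re-reads the lease table on an interval until the target
	// MAC appears, mirroring the "Attempt N" cadence in the log above.
	func waitForIP(mac string, attempts int, interval time.Duration) (string, error) {
		for i := 1; i <= attempts; i++ {
			if entries, err := parseLeases("/var/db/dhcpd_leases"); err == nil {
				for _, e := range entries {
					if e.HWAddress == mac {
						return e.IPAddress, nil
					}
				}
			}
			time.Sleep(interval)
		}
		return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
	}

	func main() {
		ip, err := waitForIP("36:bd:ee:4f:b1:7b", 60, 2*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("guest IP:", ip)
	}

In a failing run like this one, such a loop simply exhausts its attempts, since no lease for the target MAC is ever written to the file.
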
	[... Attempts 5 through 17 (02:41:35 – 02:41:59, one scan every 2 seconds) condensed: each re-read of /var/db/dhcpd_leases returned the same 18 entries shown in Attempt 4 above (192.169.0.2 – 192.169.0.19), and the target MAC 36:bd:ee:4f:b1:7b never appeared ...]
	I0917 02:42:01.443830    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 18
	I0917 02:42:01.443844    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:01.443911    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:42:01.444715    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:42:01.444769    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:01.444776    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:01.444786    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:01.444791    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:01.444798    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:01.444809    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:01.444818    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:01.444825    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:01.444842    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:01.444857    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:01.444875    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:01.444888    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:01.444896    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:01.444909    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:01.444914    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:01.444922    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:01.444929    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:01.444936    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:01.444944    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:03.446999    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 19
	I0917 02:42:03.447013    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:03.447066    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:42:03.447968    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:42:03.448013    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:03.448028    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:03.448043    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:03.448063    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:03.448077    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:03.448103    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:03.448115    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:03.448123    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:03.448129    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:03.448136    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:03.448147    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:03.448154    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:03.448160    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:03.448166    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:03.448171    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:03.448179    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:03.448184    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:03.448189    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:03.448196    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:05.449658    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 20
	I0917 02:42:05.449671    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:05.449738    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:42:05.450509    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:42:05.450563    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:05.450574    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:05.450582    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:05.450588    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:05.450595    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:05.450601    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:05.450612    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:05.450619    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:05.450630    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:05.450640    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:05.450648    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:05.450655    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:05.450662    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:05.450669    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:05.450675    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:05.450683    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:05.450690    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:05.450698    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:05.450706    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:07.452238    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 21
	I0917 02:42:07.452250    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:07.452324    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:42:07.453066    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:42:07.453138    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:07.453148    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:07.453157    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:07.453165    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:07.453172    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:07.453177    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:07.453183    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:07.453196    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:07.453204    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:07.453212    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:07.453220    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:07.453227    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:07.453241    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:07.453247    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:07.453254    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:07.453262    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:07.453269    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:07.453276    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:07.453291    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:09.454968    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 22
	I0917 02:42:09.454981    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:09.454994    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:42:09.455770    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:42:09.455832    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:09.455844    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:09.455852    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:09.455860    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:09.455870    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:09.455877    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:09.455895    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:09.455906    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:09.455924    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:09.455933    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:09.455940    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:09.455947    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:09.455954    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:09.455961    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:09.455967    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:09.455974    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:09.455981    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:09.455986    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:09.456000    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:11.458015    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 23
	I0917 02:42:11.458030    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:11.458058    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:42:11.458832    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:42:11.458885    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:11.458900    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:11.458913    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:11.458919    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:11.458926    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:11.458933    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:11.458943    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:11.458954    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:11.458966    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:11.458974    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:11.458981    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:11.458994    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:11.459002    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:11.459009    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:11.459018    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:11.459024    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:11.459030    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:11.459035    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:11.459058    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:13.460325    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 24
	I0917 02:42:13.460336    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:13.460404    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:42:13.461233    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:42:13.461285    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:13.461307    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:13.461320    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:13.461328    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:13.461341    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:13.461349    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:13.461356    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:13.461377    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:13.461388    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:13.461396    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:13.461403    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:13.461410    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:13.461417    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:13.461425    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:13.461432    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:13.461440    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:13.461447    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:13.461454    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:13.461462    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:15.461859    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 25
	I0917 02:42:15.461874    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:15.461954    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:42:15.462814    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:42:15.462872    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:15.462891    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:15.462916    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:15.462936    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:15.462945    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:15.462954    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:15.462980    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:15.462991    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:15.462998    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:15.463006    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:15.463013    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:15.463020    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:15.463027    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:15.463032    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:15.463040    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:15.463047    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:15.463053    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:15.463066    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:15.463081    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:17.463435    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 26
	I0917 02:42:17.463447    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:17.463524    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:42:17.464297    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:42:17.464349    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:17.464361    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:17.464373    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:17.464381    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:17.464389    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:17.464395    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:17.464401    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:17.464407    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:17.464413    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:17.464421    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:17.464434    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:17.464453    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:17.464470    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:17.464482    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:17.464489    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:17.464501    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:17.464517    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:17.464530    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:17.464542    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:19.466524    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 27
	I0917 02:42:19.466539    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:19.466579    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:42:19.467332    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:42:19.467396    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:19.467406    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:19.467415    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:19.467422    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:19.467429    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:19.467438    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:19.467453    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:19.467463    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:19.467470    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:19.467478    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:19.467485    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:19.467492    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:19.467507    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:19.467519    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:19.467529    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:19.467538    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:19.467544    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:19.467552    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:19.467561    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:21.469614    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 28
	I0917 02:42:21.469629    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:21.469680    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:42:21.470546    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:42:21.470593    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:21.470604    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:21.470614    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:21.470620    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:21.470642    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:21.470653    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:21.470660    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:21.470668    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:21.470683    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:21.470695    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:21.470702    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:21.470722    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:21.470728    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:21.470735    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:21.470741    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:21.470760    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:21.470772    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:21.470787    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:21.470796    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:23.471248    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Attempt 29
	I0917 02:42:23.471262    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:42:23.471348    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | hyperkit pid from json: 6613
	I0917 02:42:23.472135    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Searching for 36:bd:ee:4f:b1:7b in /var/db/dhcpd_leases ...
	I0917 02:42:23.472161    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0917 02:42:23.472170    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:e9:71:cb:95:85 ID:1,6:e9:71:cb:95:85 Lease:0x66ea9f22}
	I0917 02:42:23.472194    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b6:aa:ae:c8:18:2 ID:1,b6:aa:ae:c8:18:2 Lease:0x66ea9e64}
	I0917 02:42:23.472208    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:9a:b7:c9:9:ef ID:1,42:9a:b7:c9:9:ef Lease:0x66e94c60}
	I0917 02:42:23.472216    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66ea9ce9}
	I0917 02:42:23.472228    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9d99}
	I0917 02:42:23.472242    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9d55}
	I0917 02:42:23.472257    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:92:40:aa:ed:4a:c1 ID:1,92:40:aa:ed:4a:c1 Lease:0x66e9497d}
	I0917 02:42:23.472265    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:8d:7a:e6:b2:bb ID:1,7e:8d:7a:e6:b2:bb Lease:0x66ea9ab8}
	I0917 02:42:23.472272    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:8e:5f:d:b8:74:93 ID:1,8e:5f:d:b8:74:93 Lease:0x66ea9a59}
	I0917 02:42:23.472281    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:d2:b4:f5:d1:aa:c6 ID:1,d2:b4:f5:d1:aa:c6 Lease:0x66ea9a29}
	I0917 02:42:23.472289    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:96:26:20:82:96:e ID:1,96:26:20:82:96:e Lease:0x66ea99b2}
	I0917 02:42:23.472313    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94897}
	I0917 02:42:23.472322    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:42:23.472332    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:42:23.472341    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:42:23.472353    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:2a:55:52:63:c1:5b ID:1,2a:55:52:63:c1:5b Lease:0x66ea95a9}
	I0917 02:42:23.472361    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:52:d3:62:d:e2:b9 ID:1,52:d3:62:d:e2:b9 Lease:0x66e94387}
	I0917 02:42:23.472375    6541 main.go:141] libmachine: (force-systemd-env-601000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:32:ad:62:91:12:32 ID:1,32:ad:62:91:12:32 Lease:0x66ea9178}
	I0917 02:42:25.472512    6541 client.go:171] duration metric: took 1m1.062832982s to LocalClient.Create
	I0917 02:42:27.473048    6541 start.go:128] duration metric: took 1m3.095269098s to createHost
	I0917 02:42:27.473062    6541 start.go:83] releasing machines lock for "force-systemd-env-601000", held for 1m3.095368155s
	W0917 02:42:27.473156    6541 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-601000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:bd:ee:4f:b1:7b
	I0917 02:42:27.536366    6541 out.go:201] 
	W0917 02:42:27.557381    6541 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:bd:ee:4f:b1:7b
	W0917 02:42:27.557395    6541 out.go:270] * 
	W0917 02:42:27.558078    6541 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:42:27.619426    6541 out.go:201] 

                                                
                                                
** /stderr **
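The GUEST_PROVISION error above means the hyperkit driver generated a MAC address (36:bd:ee:4f:b1:7b) for the new VM, but no matching entry ever appeared in macOS's DHCP leases file, so the driver could not learn the guest IP. As a sketch of how to inspect that file by hand for the failing MAC (the same /var/db/dhcpd_leases the driver polls in the DBG lines above):

	grep -B 2 -A 3 '36:bd:ee:4f:b1:7b' /var/db/dhcpd_leases || echo "no lease recorded for this MAC"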
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-601000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-601000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-601000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (180.998127ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-601000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
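The "<no value>" placeholders in the suggestion are an unrendered template field in minikube's error message; the intended commands are presumably profile-scoped, along the lines of (an assumption, not output from this run, mirroring the invocation quoted at docker_test.go:157):

	out/minikube-darwin-amd64 delete -p force-systemd-env-601000
	out/minikube-darwin-amd64 start -p force-systemd-env-601000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit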
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-601000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-17 02:42:27.910285 -0700 PDT m=+3894.030623043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-601000 -n force-systemd-env-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-601000 -n force-systemd-env-601000: exit status 7 (82.858821ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 02:42:27.991110    6643 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 02:42:27.991133    6643 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
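Exit status 7 here follows directly from the provisioning failure: no IP was ever recorded for the host, so the status probe fails before any component state can be read, which the harness tolerates ("may be ok" below). The same probe can be widened beyond {{.Host}}; a sketch, assuming the standard minikube status template fields:

	out/minikube-darwin-amd64 status -p force-systemd-env-601000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'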
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-601000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-601000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-601000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-601000: (5.262337122s)
--- FAIL: TestForceSystemdEnv (233.72s)
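Note that TestForceSystemdEnv never reached its real assertion: with MINIKUBE_FORCE_SYSTEMD set, docker_test.go:110 checks that Docker inside the guest reports the systemd cgroup driver. Against a cluster that does start, the check reduces to the command the test ran (a sketch; substitute a profile that actually came up):

	out/minikube-darwin-amd64 -p force-systemd-env-601000 ssh "docker info --format {{.CgroupDriver}}"
	# expected for this test: systemd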

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (124.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-857000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-857000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-857000 -v=7 --alsologtostderr: (27.055896799s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-857000 --wait=true -v=7 --alsologtostderr
E0917 02:06:35.817025    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:06:43.014632    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-857000 --wait=true -v=7 --alsologtostderr: exit status 90 (1m34.332340421s)

                                                
                                                
-- stdout --
	* [ha-857000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-857000" primary control-plane node in "ha-857000" cluster
	* Restarting existing hyperkit VM for "ha-857000" ...
	* Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	* Enabled addons: 
	
	* Starting "ha-857000-m02" control-plane node in "ha-857000" cluster
	* Restarting existing hyperkit VM for "ha-857000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:06:03.640574    3951 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:06:03.641305    3951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:06:03.641314    3951 out.go:358] Setting ErrFile to fd 2...
	I0917 02:06:03.641320    3951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:06:03.641922    3951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:06:03.643438    3951 out.go:352] Setting JSON to false
	I0917 02:06:03.667323    3951 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2133,"bootTime":1726561830,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:06:03.667643    3951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:06:03.689297    3951 out.go:177] * [ha-857000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:06:03.731193    3951 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:06:03.731279    3951 notify.go:220] Checking for updates...
	I0917 02:06:03.773863    3951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:03.794994    3951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:06:03.815992    3951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:06:03.837103    3951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:06:03.858226    3951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:06:03.879788    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:03.879962    3951 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:06:03.880706    3951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:03.880768    3951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:06:03.890269    3951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51913
	I0917 02:06:03.890631    3951 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:06:03.891014    3951 main.go:141] libmachine: Using API Version  1
	I0917 02:06:03.891039    3951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:06:03.891290    3951 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:06:03.891417    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:03.920139    3951 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 02:06:03.941013    3951 start.go:297] selected driver: hyperkit
	I0917 02:06:03.941066    3951 start.go:901] validating driver "hyperkit" against &{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:fals
e efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:06:03.941369    3951 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:06:03.941551    3951 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:06:03.941770    3951 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:06:03.951375    3951 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:06:03.956115    3951 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:03.956133    3951 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:06:03.959464    3951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:06:03.959502    3951 cni.go:84] Creating CNI manager for ""
	I0917 02:06:03.959545    3951 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:06:03.959620    3951 start.go:340] cluster config:
	{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-t
iller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:06:03.959742    3951 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:06:04.002033    3951 out.go:177] * Starting "ha-857000" primary control-plane node in "ha-857000" cluster
	I0917 02:06:04.022857    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:06:04.022895    3951 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:06:04.022909    3951 cache.go:56] Caching tarball of preloaded images
	I0917 02:06:04.023022    3951 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:06:04.023030    3951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:06:04.023135    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:04.023618    3951 start.go:360] acquireMachinesLock for ha-857000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:06:04.023673    3951 start.go:364] duration metric: took 42.184µs to acquireMachinesLock for "ha-857000"
	I0917 02:06:04.023691    3951 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:06:04.023701    3951 fix.go:54] fixHost starting: 
	I0917 02:06:04.023937    3951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:04.023964    3951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:06:04.032560    3951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51915
	I0917 02:06:04.032902    3951 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:06:04.033222    3951 main.go:141] libmachine: Using API Version  1
	I0917 02:06:04.033234    3951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:06:04.033482    3951 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:06:04.033595    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:04.033680    3951 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:06:04.033773    3951 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:04.033830    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3402
	I0917 02:06:04.034740    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3402 missing from process table
	I0917 02:06:04.034780    3951 fix.go:112] recreateIfNeeded on ha-857000: state=Stopped err=<nil>
	I0917 02:06:04.034806    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	W0917 02:06:04.034888    3951 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:06:04.077159    3951 out.go:177] * Restarting existing hyperkit VM for "ha-857000" ...
	I0917 02:06:04.097853    3951 main.go:141] libmachine: (ha-857000) Calling .Start
	I0917 02:06:04.098040    3951 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:04.098062    3951 main.go:141] libmachine: (ha-857000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid
	I0917 02:06:04.099681    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3402 missing from process table
	I0917 02:06:04.099693    3951 main.go:141] libmachine: (ha-857000) DBG | pid 3402 is in state "Stopped"
	I0917 02:06:04.099713    3951 main.go:141] libmachine: (ha-857000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid...
	I0917 02:06:04.100071    3951 main.go:141] libmachine: (ha-857000) DBG | Using UUID f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c
	I0917 02:06:04.220854    3951 main.go:141] libmachine: (ha-857000) DBG | Generated MAC c2:63:2b:63:80:76
	I0917 02:06:04.220886    3951 main.go:141] libmachine: (ha-857000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:06:04.221000    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6870)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:04.221030    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6870)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:04.221075    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:06:04.221122    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:06:04.221130    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:06:04.222561    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Pid is 3964
	I0917 02:06:04.222927    3951 main.go:141] libmachine: (ha-857000) DBG | Attempt 0
	I0917 02:06:04.222940    3951 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:04.222982    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3964
	I0917 02:06:04.224835    3951 main.go:141] libmachine: (ha-857000) DBG | Searching for c2:63:2b:63:80:76 in /var/db/dhcpd_leases ...
	I0917 02:06:04.224889    3951 main.go:141] libmachine: (ha-857000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:06:04.224918    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:06:04.224931    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea97be}
	I0917 02:06:04.224951    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9724}
	I0917 02:06:04.224959    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea96ad}
	I0917 02:06:04.224964    3951 main.go:141] libmachine: (ha-857000) DBG | Found match: c2:63:2b:63:80:76
	I0917 02:06:04.224968    3951 main.go:141] libmachine: (ha-857000) DBG | IP: 192.169.0.5
	I0917 02:06:04.225012    3951 main.go:141] libmachine: (ha-857000) Calling .GetConfigRaw
	I0917 02:06:04.225649    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:04.225875    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:04.226292    3951 machine.go:93] provisionDockerMachine start ...
	I0917 02:06:04.226303    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:04.226417    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:04.226547    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:04.226663    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:04.226797    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:04.226907    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:04.227062    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:04.227266    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:04.227274    3951 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:06:04.230562    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:06:04.281228    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:06:04.281906    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:04.281925    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:04.281932    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:04.281939    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:04.662879    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:06:04.662893    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:06:04.777528    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:04.777548    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:04.777560    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:04.777595    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:04.778494    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:06:04.778504    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:06:10.382594    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:06:10.382613    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:06:10.382641    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:06:10.407226    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:06:15.292530    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:06:15.292580    3951 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:06:15.292726    3951 buildroot.go:166] provisioning hostname "ha-857000"
	I0917 02:06:15.292736    3951 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:06:15.292849    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.293003    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.293094    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.293188    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.293326    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.293545    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.293705    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.293713    3951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000 && echo "ha-857000" | sudo tee /etc/hostname
	I0917 02:06:15.366591    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000
	
	I0917 02:06:15.366612    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.366751    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.366847    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.366940    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.367034    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.367186    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.367320    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.367331    3951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:06:15.430651    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:06:15.430671    3951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:06:15.430688    3951 buildroot.go:174] setting up certificates
	I0917 02:06:15.430697    3951 provision.go:84] configureAuth start
	I0917 02:06:15.430705    3951 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:06:15.430833    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:15.430948    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.431043    3951 provision.go:143] copyHostCerts
	I0917 02:06:15.431073    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:15.431127    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:06:15.431135    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:15.431279    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:06:15.431473    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:15.431502    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:06:15.431506    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:15.431572    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:06:15.431702    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:15.431739    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:06:15.431744    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:15.431808    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:06:15.431954    3951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000 san=[127.0.0.1 192.169.0.5 ha-857000 localhost minikube]
	I0917 02:06:15.502156    3951 provision.go:177] copyRemoteCerts
	I0917 02:06:15.502214    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:06:15.502227    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.502353    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.502455    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.502537    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.502627    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:15.536073    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:06:15.536152    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:06:15.555893    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:06:15.555952    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 02:06:15.576096    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:06:15.576155    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:06:15.595956    3951 provision.go:87] duration metric: took 165.243542ms to configureAuth
	I0917 02:06:15.595981    3951 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:06:15.596163    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:15.596186    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:15.596327    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.596414    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.596502    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.596587    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.596672    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.596795    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.596928    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.596935    3951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:06:15.651820    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:06:15.651831    3951 buildroot.go:70] root file system type: tmpfs
	I0917 02:06:15.651920    3951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:06:15.651934    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.652065    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.652168    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.652259    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.652347    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.652479    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.652616    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.652659    3951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:06:15.717812    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:06:15.717834    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.717968    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.718062    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.718155    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.718250    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.718387    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.718524    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.718536    3951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:06:17.394959    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:06:17.394973    3951 machine.go:96] duration metric: took 13.168443896s to provisionDockerMachine
	I0917 02:06:17.394995    3951 start.go:293] postStartSetup for "ha-857000" (driver="hyperkit")
	I0917 02:06:17.395004    3951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:06:17.395018    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.395227    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:06:17.395243    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.395347    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.395465    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.395565    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.395656    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.438838    3951 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:06:17.443638    3951 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:06:17.443658    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:06:17.443750    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:06:17.443904    3951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:06:17.443911    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:06:17.444089    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:06:17.451612    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:06:17.483402    3951 start.go:296] duration metric: took 88.39524ms for postStartSetup
	I0917 02:06:17.483429    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.483612    3951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:06:17.483623    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.483710    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.483808    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.483897    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.483966    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.517140    3951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:06:17.517209    3951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:06:17.552751    3951 fix.go:56] duration metric: took 13.528816727s for fixHost
	I0917 02:06:17.552773    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.552913    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.553026    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.553112    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.553196    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.553326    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:17.553466    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:17.553473    3951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:06:17.609371    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726563977.697638270
	
	I0917 02:06:17.609383    3951 fix.go:216] guest clock: 1726563977.697638270
	I0917 02:06:17.609388    3951 fix.go:229] Guest: 2024-09-17 02:06:17.69763827 -0700 PDT Remote: 2024-09-17 02:06:17.552764 -0700 PDT m=+13.948274598 (delta=144.87427ms)
	I0917 02:06:17.609406    3951 fix.go:200] guest clock delta is within tolerance: 144.87427ms
	I0917 02:06:17.609410    3951 start.go:83] releasing machines lock for "ha-857000", held for 13.585495629s
	I0917 02:06:17.609431    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.609563    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:17.609665    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.609955    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.610053    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.610139    3951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:06:17.610168    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.610194    3951 ssh_runner.go:195] Run: cat /version.json
	I0917 02:06:17.610206    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.610247    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.610275    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.610357    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.610376    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.610500    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.610520    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.610600    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.610622    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.697769    3951 ssh_runner.go:195] Run: systemctl --version
	I0917 02:06:17.702709    3951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 02:06:17.706848    3951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:06:17.706892    3951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:06:17.718886    3951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:06:17.718900    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:17.719004    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:17.737294    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:06:17.746145    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:06:17.754878    3951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:06:17.754923    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:06:17.763740    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:17.772496    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:06:17.781224    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:17.790031    3951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:06:17.799078    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:06:17.808154    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:06:17.817191    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:06:17.826325    3951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:06:17.834538    3951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:06:17.842770    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:17.944652    3951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:06:17.962631    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:17.962719    3951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:06:17.974517    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:17.987421    3951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:06:18.001906    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:18.013186    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:18.024102    3951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:06:18.045444    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:18.058849    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:18.073851    3951 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:06:18.076885    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:06:18.084040    3951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:06:18.097595    3951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:06:18.193717    3951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:06:18.309886    3951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:06:18.309951    3951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
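The 130-byte daemon.json pushed to the guest here is not echoed in the log. Going by the "configuring docker to use cgroupfs" message, a configuration of roughly this shape is what such a step writes; the exact contents below are an assumption, not the byte-for-byte file:

    // Write a daemon.json that pins Docker to the cgroupfs cgroup driver.
    // The JSON body is an assumed example inferred from the log message above.
    package main

    import "os"

    func main() {
    	daemonJSON := []byte(`{
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": {"max-size": "100m"},
      "storage-driver": "overlay2"
    }
    `)
    	if err := os.WriteFile("/etc/docker/daemon.json", daemonJSON, 0o600); err != nil {
    		panic(err)
    	}
    }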
	I0917 02:06:18.324367    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:18.418680    3951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:06:20.733359    3951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.314622452s)
	I0917 02:06:20.733433    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:06:20.744031    3951 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:06:20.756945    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:06:20.767405    3951 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:06:20.860682    3951 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:06:20.962907    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:21.070080    3951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:06:21.083874    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:06:21.094971    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:21.190975    3951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:06:21.258446    3951 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:06:21.258552    3951 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:06:21.262963    3951 start.go:563] Will wait 60s for crictl version
	I0917 02:06:21.263020    3951 ssh_runner.go:195] Run: which crictl
	I0917 02:06:21.266695    3951 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:06:21.293648    3951 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:06:21.293750    3951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:06:21.309528    3951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:06:21.349115    3951 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:06:21.349164    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:21.349574    3951 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:06:21.354153    3951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
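This bash one-liner is an idempotent /etc/hosts update: filter out any stale host.minikube.internal mapping, append the current one, and copy the merged file back into place. The same pattern in Go, simplified (a hypothetical helper; the real step goes through a temp file and sudo cp, as shown above):

    // Replace any existing host.minikube.internal entry with a fresh mapping.
    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.169.0.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line) // keep every unrelated line
    		}
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }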
	I0917 02:06:21.363705    3951 kubeadm.go:883] updating cluster {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 02:06:21.363793    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:06:21.363866    3951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:06:21.378216    3951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:06:21.378227    3951 docker.go:615] Images already preloaded, skipping extraction
	I0917 02:06:21.378310    3951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:06:21.394015    3951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:06:21.394037    3951 cache_images.go:84] Images are preloaded, skipping loading
	I0917 02:06:21.394050    3951 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 02:06:21.394124    3951 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
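Note the empty ExecStart= line in the generated kubelet drop-in above: in systemd, assigning an empty value to a list-type directive resets whatever the base unit declared, so the ExecStart that follows cleanly replaces the packaged kubelet command line instead of being appended to it as a second invocation.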
	I0917 02:06:21.394209    3951 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:06:21.429497    3951 cni.go:84] Creating CNI manager for ""
	I0917 02:06:21.429509    3951 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:06:21.429523    3951 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:06:21.429538    3951 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-857000 NodeName:ha-857000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:06:21.429624    3951 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-857000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 02:06:21.429636    3951 kube-vip.go:115] generating kube-vip config ...
	I0917 02:06:21.429694    3951 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:06:21.442428    3951 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:06:21.442505    3951 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
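In the manifest above, kube-vip runs as a static pod on each control-plane node: the instances elect a leader through the plndr-cp-lock lease in kube-system, the leader answers ARP for the virtual IP 192.169.0.254, and with lb_enable/lb_port set it also spreads API traffic on 8443 across the control planes. A hypothetical probe (not part of the test) to confirm the VIP is answering:

    // Probe the kube-vip virtual IP to see whether the API endpoint responds.
    // VIP and port come from the config above; TLS verification is skipped
    // because this is a reachability check, not an authenticated call.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.169.0.254:8443/healthz")
    	if err != nil {
    		fmt.Println("VIP not answering:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("VIP answered:", resp.Status)
    }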
	I0917 02:06:21.442559    3951 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:06:21.451375    3951 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:06:21.451431    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 02:06:21.459648    3951 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 02:06:21.473122    3951 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:06:21.487014    3951 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 02:06:21.500992    3951 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:06:21.514562    3951 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:06:21.517444    3951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:06:21.527518    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:21.625140    3951 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:06:21.639257    3951 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.5
	I0917 02:06:21.639269    3951 certs.go:194] generating shared ca certs ...
	I0917 02:06:21.639280    3951 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:21.639439    3951 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:06:21.639492    3951 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:06:21.639503    3951 certs.go:256] generating profile certs ...
	I0917 02:06:21.639592    3951 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:06:21.639611    3951 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea
	I0917 02:06:21.639646    3951 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0917 02:06:21.706715    3951 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea ...
	I0917 02:06:21.706729    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea: {Name:mk3f381e64586a5cdd027dc403cd38b58de19cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:21.707284    3951 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea ...
	I0917 02:06:21.707298    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea: {Name:mk7ad610a632f0df99198e2c9491ed57c1c9afa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:21.707543    3951 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea -> /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt
	I0917 02:06:21.707724    3951 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea -> /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key
	I0917 02:06:21.707940    3951 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:06:21.707949    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:06:21.707971    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:06:21.707989    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:06:21.708013    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:06:21.708032    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:06:21.708050    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:06:21.708068    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:06:21.708087    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:06:21.708175    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:06:21.708221    3951 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:06:21.708229    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:06:21.708259    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:06:21.708290    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:06:21.708317    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:06:21.708378    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:06:21.708413    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:21.708433    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:06:21.708450    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:06:21.708936    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:06:21.731634    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:06:21.755014    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:06:21.780416    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:06:21.806686    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:06:21.830924    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:06:21.860517    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:06:21.883515    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:06:21.904458    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:06:21.940425    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:06:21.977748    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:06:22.031404    3951 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 02:06:22.066177    3951 ssh_runner.go:195] Run: openssl version
	I0917 02:06:22.070562    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:06:22.079011    3951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:22.082434    3951 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:22.082478    3951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:22.087894    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:06:22.096140    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:06:22.104466    3951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:06:22.107959    3951 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:06:22.107997    3951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:06:22.112345    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:06:22.120730    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:06:22.129508    3951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:06:22.133071    3951 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:06:22.133113    3951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:06:22.137371    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
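The test -s/ln -fs pairs above follow OpenSSL's hashed-directory convention: a CA certificate in /etc/ssl/certs is looked up by <subject-hash>.0, where the hash is what openssl x509 -hash prints (b5213941 for minikubeCA.pem here). A Go sketch of the same hash-and-link step, shelling out to openssl exactly as the test does (illustrative only):

    // Compute the OpenSSL subject hash for a CA and create the <hash>.0 symlink.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", pemPath)
    }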
	I0917 02:06:22.145685    3951 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:06:22.149175    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:06:22.154152    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:06:22.158697    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:06:22.163258    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:06:22.167625    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:06:22.172054    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
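Each of the -checkend 86400 runs above asks one question: will this certificate expire within the next 86400 seconds (24 hours)? A non-zero exit would mark the cert for regeneration before the cluster restart proceeds. The equivalent check in Go (a sketch; the path is one of the certs probed above):

    // Report whether a certificate expires within the next 24 hours,
    // mirroring `openssl x509 -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    	} else {
    		fmt.Println("certificate is valid past the 24h window:", cert.NotAfter)
    	}
    }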
	I0917 02:06:22.176282    3951 kubeadm.go:392] StartCluster: {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:06:22.176426    3951 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:06:22.189260    3951 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:06:22.196889    3951 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 02:06:22.196900    3951 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:06:22.196943    3951 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:06:22.204529    3951 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:06:22.204834    3951 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-857000" does not appear in /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:22.204922    3951 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1025/kubeconfig needs updating (will repair): [kubeconfig missing "ha-857000" cluster setting kubeconfig missing "ha-857000" context setting]
	I0917 02:06:22.205131    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:22.205534    3951 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:22.205732    3951 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3bff720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:06:22.206065    3951 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 02:06:22.206259    3951 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:06:22.213498    3951 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 02:06:22.213509    3951 kubeadm.go:597] duration metric: took 16.605221ms to restartPrimaryControlPlane
	I0917 02:06:22.213515    3951 kubeadm.go:394] duration metric: took 37.238807ms to StartCluster
	I0917 02:06:22.213523    3951 settings.go:142] acquiring lock: {Name:mk5de70c670a23cf49fdbd19ddb5c1d2a9de9a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:22.213600    3951 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:22.213968    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:22.214179    3951 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:06:22.214192    3951 start.go:241] waiting for startup goroutines ...
	I0917 02:06:22.214206    3951 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:06:22.214324    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:22.256356    3951 out.go:177] * Enabled addons: 
	I0917 02:06:22.277195    3951 addons.go:510] duration metric: took 62.984897ms for enable addons: enabled=[]
	I0917 02:06:22.277282    3951 start.go:246] waiting for cluster config update ...
	I0917 02:06:22.277295    3951 start.go:255] writing updated cluster config ...
	I0917 02:06:22.300377    3951 out.go:201] 
	I0917 02:06:22.321646    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:22.321775    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:22.343932    3951 out.go:177] * Starting "ha-857000-m02" control-plane node in "ha-857000" cluster
	I0917 02:06:22.386310    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:06:22.386345    3951 cache.go:56] Caching tarball of preloaded images
	I0917 02:06:22.386520    3951 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:06:22.386539    3951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:06:22.386678    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:22.387665    3951 start.go:360] acquireMachinesLock for ha-857000-m02: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:06:22.387781    3951 start.go:364] duration metric: took 93.188µs to acquireMachinesLock for "ha-857000-m02"
	I0917 02:06:22.387807    3951 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:06:22.387815    3951 fix.go:54] fixHost starting: m02
	I0917 02:06:22.388245    3951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:22.388280    3951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:06:22.397656    3951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51937
	I0917 02:06:22.397993    3951 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:06:22.398338    3951 main.go:141] libmachine: Using API Version  1
	I0917 02:06:22.398355    3951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:06:22.398604    3951 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:06:22.398732    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:22.398839    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetState
	I0917 02:06:22.398926    3951 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:22.398995    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3905
	I0917 02:06:22.399925    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3905 missing from process table
	I0917 02:06:22.399987    3951 fix.go:112] recreateIfNeeded on ha-857000-m02: state=Stopped err=<nil>
	I0917 02:06:22.400002    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	W0917 02:06:22.400097    3951 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:06:22.442146    3951 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m02" ...
	I0917 02:06:22.463239    3951 main.go:141] libmachine: (ha-857000-m02) Calling .Start
	I0917 02:06:22.463548    3951 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:22.463605    3951 main.go:141] libmachine: (ha-857000-m02) minikube might have been shut down in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid
	I0917 02:06:22.465343    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3905 missing from process table
	I0917 02:06:22.465354    3951 main.go:141] libmachine: (ha-857000-m02) DBG | pid 3905 is in state "Stopped"
	I0917 02:06:22.465372    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid...
	I0917 02:06:22.465746    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Using UUID 1940a045-0b1b-4b28-842d-d4858a62cbd3
	I0917 02:06:22.495548    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Generated MAC 9a:95:4e:4b:65:fe
	I0917 02:06:22.495583    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:06:22.495857    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000383560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:22.495910    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000383560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:22.496018    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1940a045-0b1b-4b28-842d-d4858a62cbd3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:06:22.496120    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1940a045-0b1b-4b28-842d-d4858a62cbd3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:06:22.496143    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:06:22.497973    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Pid is 3976
	I0917 02:06:22.498454    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Attempt 0
	I0917 02:06:22.498484    3951 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:22.498545    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3976
	I0917 02:06:22.500282    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Searching for 9a:95:4e:4b:65:fe in /var/db/dhcpd_leases ...
	I0917 02:06:22.500349    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:06:22.500372    3951 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea9805}
	I0917 02:06:22.500382    3951 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:06:22.500397    3951 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea97be}
	I0917 02:06:22.500410    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Found match: 9a:95:4e:4b:65:fe
	I0917 02:06:22.500437    3951 main.go:141] libmachine: (ha-857000-m02) DBG | IP: 192.169.0.6
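Because hyperkit VMs get their addresses over DHCP, the driver recovers the restarted VM's IP by scanning macOS's /var/db/dhcpd_leases for the MAC it generated. A simplified Go sketch of that lookup (the lease-file parsing here is deliberately loose; the fields follow the dhcp entries logged above):

    // Find the IP leased to a given MAC in /var/db/dhcpd_leases.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const wantMAC = "9a:95:4e:4b:65:fe"
    	f, err := os.Open("/var/db/dhcpd_leases")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "ip_address=") {
    			ip = strings.TrimPrefix(line, "ip_address=") // remember this entry's IP
    		}
    		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, wantMAC) {
    			fmt.Println("found", wantMAC, "->", ip)
    			return
    		}
    	}
    	fmt.Println("no lease found for", wantMAC)
    }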
	I0917 02:06:22.500486    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetConfigRaw
	I0917 02:06:22.501123    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:06:22.501362    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:22.501877    3951 machine.go:93] provisionDockerMachine start ...
	I0917 02:06:22.501887    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:22.502006    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:22.502140    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:22.502253    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:22.502355    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:22.502453    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:22.502592    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:22.502794    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:22.502804    3951 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:06:22.506011    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:06:22.516718    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:06:22.517536    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:22.517559    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:22.517587    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:22.517605    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:22.902525    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:06:22.902540    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:06:23.017245    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:23.017263    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:23.017272    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:23.017286    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:23.018137    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:06:23.018146    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:06:28.664665    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:06:28.664731    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:06:28.664739    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:06:28.688834    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:06:33.560885    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:06:33.560902    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:06:33.561080    3951 buildroot.go:166] provisioning hostname "ha-857000-m02"
	I0917 02:06:33.561088    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:06:33.561176    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.561264    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.561361    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.561457    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.561572    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.561724    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.561884    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.561894    3951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m02 && echo "ha-857000-m02" | sudo tee /etc/hostname
	I0917 02:06:33.626435    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m02
	
	I0917 02:06:33.626450    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.626583    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.626692    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.626783    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.626875    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.627027    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.627173    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.627184    3951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:06:33.685124    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:06:33.685140    3951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:06:33.685149    3951 buildroot.go:174] setting up certificates
	I0917 02:06:33.685155    3951 provision.go:84] configureAuth start
	I0917 02:06:33.685161    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:06:33.685285    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:06:33.685391    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.685472    3951 provision.go:143] copyHostCerts
	I0917 02:06:33.685505    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:33.685552    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:06:33.685558    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:33.685701    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:06:33.686213    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:33.686248    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:06:33.686252    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:33.686328    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:06:33.686464    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:33.686504    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:06:33.686509    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:33.686577    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:06:33.686713    3951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m02 san=[127.0.0.1 192.169.0.6 ha-857000-m02 localhost minikube]
	I0917 02:06:33.724325    3951 provision.go:177] copyRemoteCerts
	I0917 02:06:33.724374    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:06:33.724388    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.724531    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.724628    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.724718    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.724808    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:33.757977    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:06:33.758053    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:06:33.777137    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:06:33.777203    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:06:33.796184    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:06:33.796248    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:06:33.815739    3951 provision.go:87] duration metric: took 130.575095ms to configureAuth
	I0917 02:06:33.815753    3951 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:06:33.815923    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:33.815937    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:33.816066    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.816180    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.816266    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.816357    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.816435    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.816546    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.816672    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.816679    3951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:06:33.868528    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:06:33.868540    3951 buildroot.go:70] root file system type: tmpfs
	I0917 02:06:33.868626    3951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:06:33.868638    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.868774    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.868862    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.868957    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.869038    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.869178    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.869313    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.869355    3951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:06:33.934180    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:06:33.934199    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.934331    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.934438    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.934537    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.934624    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.934753    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.934890    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.934902    3951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:06:35.613474    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
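The command above is an update-if-changed idiom: when the freshly rendered unit matches what is installed, diff -u exits 0 and the block after || is skipped entirely; here diff fails because no docker.service exists on this node yet, so the new file is moved into place and docker is enabled and restarted, which is what the "Created symlink" line reflects. A hedged Go sketch of driving that idiom with local exec (assumed paths; not minikube's helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Replace the unit only when it differs from the installed copy,
		// then reload systemd and restart docker (sketch; needs root).
		script := `diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; systemctl -f daemon-reload && systemctl -f enable docker && systemctl -f restart docker; }`
		out, err := exec.Command("sh", "-c", script).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("unit install failed:", err)
		}
	}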
	
	I0917 02:06:35.613490    3951 machine.go:96] duration metric: took 13.111377814s to provisionDockerMachine
	I0917 02:06:35.613498    3951 start.go:293] postStartSetup for "ha-857000-m02" (driver="hyperkit")
	I0917 02:06:35.613517    3951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:06:35.613531    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.613729    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:06:35.613743    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.613853    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.613946    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.614026    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.614114    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:35.652452    3951 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:06:35.656174    3951 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:06:35.656186    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:06:35.656273    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:06:35.656413    3951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:06:35.656420    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:06:35.656581    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:06:35.665638    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:06:35.696288    3951 start.go:296] duration metric: took 82.770634ms for postStartSetup
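postStartSetup's filesync step mirrors everything under .minikube/files into the guest at the same relative path, which is how the local 15602.pem above lands in /etc/ssl/certs. A sketch of that scan, as a hypothetical walker rather than minikube's filesync implementation:

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
		"strings"
	)

	func main() {
		// Walk the local assets tree and print the implied copy plan;
		// the root path matches the log, everything else is illustrative.
		root := "/Users/jenkins/minikube-integration/19648-1025/.minikube/files"
		filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			dest := strings.TrimPrefix(p, root) // e.g. /etc/ssl/certs/15602.pem
			fmt.Printf("scp %s --> %s\n", p, dest)
			return nil
		})
	}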
	I0917 02:06:35.696319    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.696511    3951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:06:35.696525    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.696625    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.696706    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.696794    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.696893    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:35.729642    3951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:06:35.729708    3951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:06:35.783199    3951 fix.go:56] duration metric: took 13.395150311s for fixHost
	I0917 02:06:35.783224    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.783375    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.783476    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.783551    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.783631    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.783768    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:35.783899    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:35.783906    3951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:06:35.838274    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726563995.926909320
	
	I0917 02:06:35.838288    3951 fix.go:216] guest clock: 1726563995.926909320
	I0917 02:06:35.838293    3951 fix.go:229] Guest: 2024-09-17 02:06:35.92690932 -0700 PDT Remote: 2024-09-17 02:06:35.783213 -0700 PDT m=+32.178408818 (delta=143.69632ms)
	I0917 02:06:35.838302    3951 fix.go:200] guest clock delta is within tolerance: 143.69632ms
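The reported delta is plain subtraction of the two clocks: 1726563995.926909320 s (guest) minus 1726563995.783213 s (host) is 0.14369632 s, i.e. 143.69632ms, below the drift tolerance, so the guest clock is left untouched. The same arithmetic in Go:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Seconds/nanoseconds since the Unix epoch, taken from the log above.
		guest := time.Unix(1726563995, 926909320)
		host := time.Unix(1726563995, 783213000)
		fmt.Println(guest.Sub(host)) // 143.69632ms
	}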
	I0917 02:06:35.838306    3951 start.go:83] releasing machines lock for "ha-857000-m02", held for 13.450280733s
	I0917 02:06:35.838324    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.838459    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:06:35.861800    3951 out.go:177] * Found network options:
	I0917 02:06:35.882860    3951 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 02:06:35.903716    3951 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:06:35.903755    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.904608    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.904879    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.905023    3951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:06:35.905064    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	W0917 02:06:35.905084    3951 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:06:35.905192    3951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:06:35.905211    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.905229    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.905436    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.905470    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.905665    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.905679    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.905849    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.905865    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:35.905991    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	W0917 02:06:35.936887    3951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:06:35.936958    3951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:06:36.007933    3951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
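One caveat when reading the find command above: ssh_runner logs the argv without shell quoting, so the escaped grouping parentheses and the trailing \; appear bare. With the escaping restored it is an ordinary GNU find that renames every bridge/podman CNI config to *.mk_disabled, exactly what the "disabled [...] bridge cni config(s)" line confirms. A runnable sketch, assuming GNU find and root:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Rename bridge/podman CNI configs out of the way; the parentheses
		// group the -name tests and must be escaped from the shell.
		script := `find /etc/cni/net.d -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) -printf "%p, " -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;`
		out, err := exec.Command("sudo", "sh", "-c", script).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("cni disable failed:", err)
		}
	}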
	I0917 02:06:36.007953    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:36.008056    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:36.024338    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:06:36.033262    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:06:36.042136    3951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:06:36.042188    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:06:36.050818    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:36.059619    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:06:36.068394    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:36.077285    3951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:06:36.086317    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:06:36.094948    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:06:36.103691    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:06:36.112538    3951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:06:36.120508    3951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:06:36.128434    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:36.230022    3951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:06:36.250428    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:36.250505    3951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:06:36.273190    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:36.285496    3951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:06:36.303235    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:36.314994    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:36.325990    3951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:06:36.351133    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:36.362290    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:36.377230    3951 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:06:36.380093    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:06:36.387911    3951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:06:36.401199    3951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:06:36.507714    3951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:06:36.609258    3951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:06:36.609285    3951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:06:36.623332    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:36.718880    3951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:07:37.748739    3951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.028781405s)
	I0917 02:07:37.748815    3951 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 02:07:37.786000    3951 out.go:201] 
	W0917 02:07:37.809190    3951 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 09:06:34 ha-857000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:06:34 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:34.324120961Z" level=info msg="Starting up"
	Sep 17 09:06:34 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:34.324775253Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 09:06:34 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:34.325518826Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=488
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.341058185Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356213648Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356261078Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356303349Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356313782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356436154Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356475371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356593098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356628148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356640458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356648167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356767218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356926440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358525862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358564683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358679405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358712925Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358797431Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358843725Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.360911977Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.360974504Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361053471Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361068314Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361078324Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361121426Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361365784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361471567Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361506271Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361517719Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361527110Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361535526Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361543621Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361552701Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361562674Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361570939Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361578985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361588503Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361603316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361612406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361620269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361628602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361638647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361646859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361654306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361662885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361671295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361681400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361690597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361698250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361705966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361720758Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361737654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361746364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361754112Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361847279Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361861726Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361869503Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361877991Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361885443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361899338Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361911740Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362480967Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362549430Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362632268Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362920029Z" level=info msg="containerd successfully booted in 0.022632s"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.344850604Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.385337180Z" level=info msg="Loading containers: start."
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.568192740Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.627785197Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.670471622Z" level=info msg="Loading containers: done."
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.677239663Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.677408183Z" level=info msg="Daemon has completed initialization"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.699597178Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 09:06:35 ha-857000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.704823863Z" level=info msg="API listen on [::]:2376"
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.821530126Z" level=info msg="Processing signal 'terminated'"
	Sep 17 09:06:36 ha-857000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.822577679Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.823011519Z" level=info msg="Daemon shutdown complete"
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.823037716Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.823053677Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 09:06:37 ha-857000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 09:06:37 ha-857000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 09:06:37 ha-857000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:06:37 ha-857000-m02 dockerd[1158]: time="2024-09-17T09:06:37.864990112Z" level=info msg="Starting up"
	Sep 17 09:07:37 ha-857000-m02 dockerd[1158]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 09:07:37 ha-857000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 09:07:37 ha-857000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 09:07:37 ha-857000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
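The journal pins the failure down: the first dockerd (pid 481) boots its managed containerd and comes up fine, but after minikube rewrites the containerd/crictl configuration and runs systemctl restart docker, the second dockerd (pid 1158) logs "Starting up" at 09:06:37 and gives up exactly 60 seconds later at 09:07:37, when its dial of the system socket /run/containerd/containerd.sock (containerd itself had just been restarted) exceeds the context deadline. That 60-second wait is also why the restart command above took 1m1s. A minimal Go sketch of such a deadline-bounded retry dial, illustrative rather than dockerd's actual code:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Retry a unix-socket dial until a 60s deadline expires, the pattern
		// behind the "context deadline exceeded" failure in the journal.
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
			if err == nil {
				conn.Close()
				fmt.Println("containerd is up")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("failed to dial:", ctx.Err()) // context deadline exceeded
				return
			case <-time.After(time.Second):
			}
		}
	}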
	W0917 02:07:37.809292    3951 out.go:270] * 
	W0917 02:07:37.810458    3951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:07:37.874286    3951 out.go:201] 

                                                
                                                
** /stderr **
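The journal above pins down the failure: dockerd is restarted at 09:06:37 ("Starting up") but containerd is never brought back, so 60 seconds later the daemon abandons its dial of /run/containerd/containerd.sock with "context deadline exceeded" and systemd records docker.service as failed. The wait-and-time-out behaviour can be illustrated with a minimal Go sketch (hypothetical code, not moby or minikube source; only the socket path and the 60s window are taken from the log):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry retries a unix-socket dial until it succeeds or ctx
	// expires, mirroring a client waiting for a daemon that never listens.
	func dialWithRetry(ctx context.Context, path string) (net.Conn, error) {
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", path)
			if err == nil {
				return conn, nil
			}
			select {
			case <-ctx.Done():
				return nil, fmt.Errorf("failed to dial %q: %w", path, ctx.Err())
			case <-time.After(time.Second):
			}
		}
	}

	func main() {
		// 60s matches the gap between "Starting up" (09:06:37) and the
		// failure (09:07:37) in the journal above.
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()
		if _, err := dialWithRetry(ctx, "/run/containerd/containerd.sock"); err != nil {
			fmt.Println(err) // e.g. ...: context deadline exceeded
		}
	}

Any client that blocks on a socket the peer never creates surfaces the same ctx.Err() once the deadline passes, which is exactly what dockerd[1158] reports before exiting 1.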
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-857000 -v=7 --alsologtostderr" : exit status 90
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-857000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-857000 -n ha-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-857000 -n ha-857000: exit status 2 (147.505284ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-857000 logs -n 25: (2.198888713s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-857000 cp ha-857000-m03:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m02:/home/docker/cp-test_ha-857000-m03_ha-857000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m02 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m03_ha-857000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m03:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04:/home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m04 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp testdata/cp-test.txt                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3490570692/001/cp-test_ha-857000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000:/home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000 sudo cat                                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m02:/home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m02 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03:/home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m03 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-857000 node stop m02 -v=7                                                                                                 | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-857000 node start m02 -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:05 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000 -v=7                                                                                                       | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-857000 -v=7                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT | 17 Sep 24 02:06 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-857000 --wait=true -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:06 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 02:06:03
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
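The header block above documents the glog-style line format used for every entry that follows. As a worked example, here is a small, hypothetical Go parser for one such header (the sample line is the first entry below; this is illustrative, not part of the test harness):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
		re := regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)
		line := "I0917 02:06:03.640574    3951 out.go:345] Setting OutFile to fd 1 ..."
		if m := re.FindStringSubmatch(line); m != nil {
			fmt.Printf("level=%s month=%s day=%s time=%s tid=%s file=%s line=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
		}
	}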
	I0917 02:06:03.640574    3951 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:06:03.641305    3951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:06:03.641314    3951 out.go:358] Setting ErrFile to fd 2...
	I0917 02:06:03.641320    3951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:06:03.641922    3951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:06:03.643438    3951 out.go:352] Setting JSON to false
	I0917 02:06:03.667323    3951 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2133,"bootTime":1726561830,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:06:03.667643    3951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:06:03.689297    3951 out.go:177] * [ha-857000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:06:03.731193    3951 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:06:03.731279    3951 notify.go:220] Checking for updates...
	I0917 02:06:03.773863    3951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:03.794994    3951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:06:03.815992    3951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:06:03.837103    3951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:06:03.858226    3951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:06:03.879788    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:03.879962    3951 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:06:03.880706    3951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:03.880768    3951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:06:03.890269    3951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51913
	I0917 02:06:03.890631    3951 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:06:03.891014    3951 main.go:141] libmachine: Using API Version  1
	I0917 02:06:03.891039    3951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:06:03.891290    3951 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:06:03.891417    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:03.920139    3951 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 02:06:03.941013    3951 start.go:297] selected driver: hyperkit
	I0917 02:06:03.941066    3951 start.go:901] validating driver "hyperkit" against &{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:06:03.941369    3951 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:06:03.941551    3951 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:06:03.941770    3951 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:06:03.951375    3951 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:06:03.956115    3951 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:03.956133    3951 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:06:03.959464    3951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:06:03.959502    3951 cni.go:84] Creating CNI manager for ""
	I0917 02:06:03.959545    3951 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:06:03.959620    3951 start.go:340] cluster config:
	{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:06:03.959742    3951 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:06:04.002033    3951 out.go:177] * Starting "ha-857000" primary control-plane node in "ha-857000" cluster
	I0917 02:06:04.022857    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:06:04.022895    3951 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:06:04.022909    3951 cache.go:56] Caching tarball of preloaded images
	I0917 02:06:04.023022    3951 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:06:04.023030    3951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:06:04.023135    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:04.023618    3951 start.go:360] acquireMachinesLock for ha-857000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:06:04.023673    3951 start.go:364] duration metric: took 42.184µs to acquireMachinesLock for "ha-857000"
	I0917 02:06:04.023691    3951 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:06:04.023701    3951 fix.go:54] fixHost starting: 
	I0917 02:06:04.023937    3951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:04.023964    3951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:06:04.032560    3951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51915
	I0917 02:06:04.032902    3951 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:06:04.033222    3951 main.go:141] libmachine: Using API Version  1
	I0917 02:06:04.033234    3951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:06:04.033482    3951 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:06:04.033595    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:04.033680    3951 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:06:04.033773    3951 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:04.033830    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3402
	I0917 02:06:04.034740    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3402 missing from process table
	I0917 02:06:04.034780    3951 fix.go:112] recreateIfNeeded on ha-857000: state=Stopped err=<nil>
	I0917 02:06:04.034806    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	W0917 02:06:04.034888    3951 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:06:04.077159    3951 out.go:177] * Restarting existing hyperkit VM for "ha-857000" ...
	I0917 02:06:04.097853    3951 main.go:141] libmachine: (ha-857000) Calling .Start
	I0917 02:06:04.098040    3951 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:04.098062    3951 main.go:141] libmachine: (ha-857000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid
	I0917 02:06:04.099681    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3402 missing from process table
	I0917 02:06:04.099693    3951 main.go:141] libmachine: (ha-857000) DBG | pid 3402 is in state "Stopped"
	I0917 02:06:04.099713    3951 main.go:141] libmachine: (ha-857000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid...
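The driver's stale-pid handling above (a pid recorded in hyperkit.pid that is "missing from process table") is the classic signal-0 probe. A self-contained sketch of that check follows, with a relative pid-file path standing in for the full StateDir path; this is an illustration, not the driver's actual code:

	package main

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
		"syscall"
	)

	// pidAlive reads a pid from pidfile and checks whether the process still
	// exists. Signal 0 performs only the existence/permission check.
	func pidAlive(pidfile string) (int, bool) {
		b, err := os.ReadFile(pidfile)
		if err != nil {
			return 0, false
		}
		pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
		if err != nil {
			return 0, false
		}
		return pid, syscall.Kill(pid, syscall.Signal(0)) == nil
	}

	func main() {
		if pid, alive := pidAlive("hyperkit.pid"); !alive {
			fmt.Printf("pid %d missing from process table; removing stale pid file\n", pid)
			os.Remove("hyperkit.pid")
		}
	}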
	I0917 02:06:04.100071    3951 main.go:141] libmachine: (ha-857000) DBG | Using UUID f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c
	I0917 02:06:04.220854    3951 main.go:141] libmachine: (ha-857000) DBG | Generated MAC c2:63:2b:63:80:76
	I0917 02:06:04.220886    3951 main.go:141] libmachine: (ha-857000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:06:04.221000    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6870)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:04.221030    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6870)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:04.221075    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:06:04.221122    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:06:04.221130    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:06:04.222561    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Pid is 3964
	I0917 02:06:04.222927    3951 main.go:141] libmachine: (ha-857000) DBG | Attempt 0
	I0917 02:06:04.222940    3951 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:04.222982    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3964
	I0917 02:06:04.224835    3951 main.go:141] libmachine: (ha-857000) DBG | Searching for c2:63:2b:63:80:76 in /var/db/dhcpd_leases ...
	I0917 02:06:04.224889    3951 main.go:141] libmachine: (ha-857000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:06:04.224918    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:06:04.224931    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea97be}
	I0917 02:06:04.224951    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9724}
	I0917 02:06:04.224959    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea96ad}
	I0917 02:06:04.224964    3951 main.go:141] libmachine: (ha-857000) DBG | Found match: c2:63:2b:63:80:76
	I0917 02:06:04.224968    3951 main.go:141] libmachine: (ha-857000) DBG | IP: 192.169.0.5
	I0917 02:06:04.225012    3951 main.go:141] libmachine: (ha-857000) Calling .GetConfigRaw
	I0917 02:06:04.225649    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:04.225875    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:04.226292    3951 machine.go:93] provisionDockerMachine start ...
	I0917 02:06:04.226303    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:04.226417    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:04.226547    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:04.226663    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:04.226797    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:04.226907    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:04.227062    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:04.227266    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:04.227274    3951 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:06:04.230562    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:06:04.281228    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:06:04.281906    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:04.281925    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:04.281932    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:04.281939    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:04.662879    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:06:04.662893    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:06:04.777528    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:04.777548    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:04.777560    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:04.777595    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:04.778494    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:06:04.778504    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:06:10.382594    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:06:10.382613    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:06:10.382641    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:06:10.407226    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:06:15.292530    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:06:15.292580    3951 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:06:15.292726    3951 buildroot.go:166] provisioning hostname "ha-857000"
	I0917 02:06:15.292736    3951 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:06:15.292849    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.293003    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.293094    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.293188    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.293326    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.293545    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.293705    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.293713    3951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000 && echo "ha-857000" | sudo tee /etc/hostname
	I0917 02:06:15.366591    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000
	
	I0917 02:06:15.366612    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.366751    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.366847    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.366940    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.367034    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.367186    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.367320    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.367331    3951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:06:15.430651    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:06:15.430671    3951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:06:15.430688    3951 buildroot.go:174] setting up certificates
	I0917 02:06:15.430697    3951 provision.go:84] configureAuth start
	I0917 02:06:15.430705    3951 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:06:15.430833    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:15.430948    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.431043    3951 provision.go:143] copyHostCerts
	I0917 02:06:15.431073    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:15.431127    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:06:15.431135    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:15.431279    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:06:15.431473    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:15.431502    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:06:15.431506    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:15.431572    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:06:15.431702    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:15.431739    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:06:15.431744    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:15.431808    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:06:15.431954    3951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000 san=[127.0.0.1 192.169.0.5 ha-857000 localhost minikube]
	I0917 02:06:15.502156    3951 provision.go:177] copyRemoteCerts
	I0917 02:06:15.502214    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:06:15.502227    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.502353    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.502455    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.502537    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.502627    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:15.536073    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:06:15.536152    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:06:15.555893    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:06:15.555952    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 02:06:15.576096    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:06:15.576155    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:06:15.595956    3951 provision.go:87] duration metric: took 165.243542ms to configureAuth
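configureAuth above regenerates the machine's TLS material: a server certificate is issued against minikube's CA with the SANs listed at 02:06:15.431954 (127.0.0.1, 192.169.0.5, ha-857000, localhost, minikube), and the ca/server/server-key PEMs are then scp'd into /etc/docker so dockerd can serve TLS on port 2376. A self-contained Go sketch of issuing such a SAN'd certificate follows (hypothetical; it mints a throwaway CA so it runs standalone, whereas minikube reuses the one under .minikube/certs):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// must keeps the sketch short; real provisioning code would return errors.
	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
		caKey := must(rsa.GenerateKey(rand.Reader, 2048))
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
		caCert := must(x509.ParseCertificate(caDER))

		// Server cert with the SANs from the provision.go line above.
		srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
			DNSNames:     []string{"ha-857000", "localhost", "minikube"},
		}
		srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}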
	I0917 02:06:15.595981    3951 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:06:15.596163    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:15.596186    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:15.596327    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.596414    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.596502    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.596587    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.596672    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.596795    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.596928    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.596935    3951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:06:15.651820    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:06:15.651831    3951 buildroot.go:70] root file system type: tmpfs
	I0917 02:06:15.651920    3951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:06:15.651934    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.652065    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.652168    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.652259    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.652347    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.652479    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.652616    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.652659    3951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:06:15.717812    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:06:15.717834    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.717968    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.718062    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.718155    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.718250    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.718387    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.718524    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.718536    3951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:06:17.394959    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:06:17.394973    3951 machine.go:96] duration metric: took 13.168443896s to provisionDockerMachine
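
A note on the update sequence above: the provisioner stages the rendered unit as docker.service.new and only swaps it into place, reloads systemd, and restarts Docker when diff -u exits non-zero, i.e. when the content actually changed. Since diff also fails when the old file is missing (the "can't stat" output above), a first-time install takes the same path. A minimal Go sketch of assembling such a one-liner; the helper name is illustrative, not minikube's actual code:

    package main

    import "fmt"

    // buildUnitSwapCmd returns a shell one-liner that installs newPath over
    // unitPath only when the two differ, then reloads systemd and restarts
    // the service. Hypothetical helper, for illustration only.
    func buildUnitSwapCmd(unitPath, newPath, svc string) string {
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[2]s || { sudo mv %[2]s %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[3]s && "+
                "sudo systemctl -f restart %[3]s; }",
            unitPath, newPath, svc)
    }

    func main() {
        fmt.Println(buildUnitSwapCmd(
            "/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new",
            "docker"))
    }
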
	I0917 02:06:17.394995    3951 start.go:293] postStartSetup for "ha-857000" (driver="hyperkit")
	I0917 02:06:17.395004    3951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:06:17.395018    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.395227    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:06:17.395243    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.395347    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.395465    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.395565    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.395656    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.438838    3951 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:06:17.443638    3951 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:06:17.443658    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:06:17.443750    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:06:17.443904    3951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:06:17.443911    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:06:17.444089    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:06:17.451612    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:06:17.483402    3951 start.go:296] duration metric: took 88.39524ms for postStartSetup
	I0917 02:06:17.483429    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.483612    3951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:06:17.483623    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.483710    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.483808    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.483897    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.483966    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.517140    3951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:06:17.517209    3951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:06:17.552751    3951 fix.go:56] duration metric: took 13.528816727s for fixHost
	I0917 02:06:17.552773    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.552913    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.553026    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.553112    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.553196    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.553326    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:17.553466    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:17.553473    3951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:06:17.609371    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726563977.697638270
	
	I0917 02:06:17.609383    3951 fix.go:216] guest clock: 1726563977.697638270
	I0917 02:06:17.609388    3951 fix.go:229] Guest: 2024-09-17 02:06:17.69763827 -0700 PDT Remote: 2024-09-17 02:06:17.552764 -0700 PDT m=+13.948274598 (delta=144.87427ms)
	I0917 02:06:17.609406    3951 fix.go:200] guest clock delta is within tolerance: 144.87427ms
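
The clock check above runs date +%s.%N in the guest and compares the result with the host clock; the drift (144.87427ms here) is accepted because it falls inside minikube's tolerance. A rough Go equivalent of that comparison (the 2-second tolerance below is an assumption for the sketch, not a value taken from this log):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // withinClockTolerance parses the guest's `date +%s.%N` output and
    // reports the absolute drift against the host clock, plus whether it
    // falls inside tol. Sketch only; minikube's real check lives in fix.go.
    func withinClockTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, false
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol
    }

    func main() {
        // Values taken from the log lines above.
        host := time.Unix(1726563977, 552764000)
        delta, ok := withinClockTolerance("1726563977.697638270", host, 2*time.Second)
        fmt.Println(delta, ok) // ~144.87ms true
    }
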
	I0917 02:06:17.609410    3951 start.go:83] releasing machines lock for "ha-857000", held for 13.585495629s
	I0917 02:06:17.609431    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.609563    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:17.609665    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.609955    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.610053    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.610139    3951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:06:17.610168    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.610194    3951 ssh_runner.go:195] Run: cat /version.json
	I0917 02:06:17.610206    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.610247    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.610275    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.610357    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.610376    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.610500    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.610520    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.610600    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.610622    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.697769    3951 ssh_runner.go:195] Run: systemctl --version
	I0917 02:06:17.702709    3951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 02:06:17.706848    3951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:06:17.706892    3951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:06:17.718886    3951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:06:17.718900    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:17.719004    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:17.737294    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:06:17.746145    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:06:17.754878    3951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:06:17.754923    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:06:17.763740    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:17.772496    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:06:17.781224    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:17.790031    3951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:06:17.799078    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:06:17.808154    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:06:17.817191    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:06:17.826325    3951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:06:17.834538    3951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:06:17.842770    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:17.944652    3951 ssh_runner.go:195] Run: sudo systemctl restart containerd
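
All of the containerd changes above are in-place sed rewrites of /etc/containerd/config.toml; forcing SystemdCgroup = false is the edit that actually selects the cgroupfs driver named in the log. The same substitution expressed in Go against an in-memory string, for illustration:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setSystemdCgroup mirrors the logged sed edit
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    // on an in-memory copy of /etc/containerd/config.toml.
    func setSystemdCgroup(config string, enabled bool) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
    }

    func main() {
        in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
        fmt.Print(setSystemdCgroup(in, false))
    }
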
	I0917 02:06:17.962631    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:17.962719    3951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:06:17.974517    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:17.987421    3951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:06:18.001906    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:18.013186    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:18.024102    3951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:06:18.045444    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
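
Before settling on Docker, the steps above stop containerd and crio and probe them again; systemctl is-active --quiet exits 0 only while a unit is active, so a second probe after the stop confirms it went down. A compilable sketch of that stop-then-recheck loop (it shells out to systemctl with sudo, mirroring the logged invocations, so treat it as illustrative rather than portable):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureStopped mirrors the stop-then-recheck pattern above: if the
    // unit reports active, force-stop it and confirm it went down. The
    // argument lists copy the invocations from the log.
    func ensureStopped(unit string) error {
        if exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", unit).Run() != nil {
            return nil // already inactive
        }
        if err := exec.Command("sudo", "systemctl", "stop", "-f", unit).Run(); err != nil {
            return err
        }
        if exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", unit).Run() == nil {
            return fmt.Errorf("%s is still active after stop", unit)
        }
        return nil
    }

    func main() {
        fmt.Println(ensureStopped("containerd"), ensureStopped("crio"))
    }
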
	I0917 02:06:18.058849    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:18.073851    3951 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:06:18.076885    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:06:18.084040    3951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:06:18.097595    3951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:06:18.193717    3951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:06:18.309886    3951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:06:18.309951    3951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:06:18.324367    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:18.418680    3951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:06:20.733359    3951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.314622452s)
	I0917 02:06:20.733433    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:06:20.744031    3951 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:06:20.756945    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:06:20.767405    3951 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:06:20.860682    3951 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:06:20.962907    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:21.070080    3951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:06:21.083874    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:06:21.094971    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:21.190975    3951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:06:21.258446    3951 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:06:21.258552    3951 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:06:21.262963    3951 start.go:563] Will wait 60s for crictl version
	I0917 02:06:21.263020    3951 ssh_runner.go:195] Run: which crictl
	I0917 02:06:21.266695    3951 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:06:21.293648    3951 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:06:21.293750    3951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:06:21.309528    3951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:06:21.349115    3951 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:06:21.349164    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:21.349574    3951 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:06:21.354153    3951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
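
The /etc/hosts one-liner above is a remove-then-append upsert: grep -v drops any line whose tab-separated name already matches, the fresh mapping is echoed onto the end, and the result is staged in /tmp before sudo cp copies it back. The same transformation at the string level in Go (it assumes tab-separated entries, as the logged grep pattern does):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost mirrors the logged { grep -v ...; echo ...; } pattern:
    // drop any line already ending in the tab-separated name, then append
    // the fresh mapping.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.169.0.1", "host.minikube.internal"))
    }
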
	I0917 02:06:21.363705    3951 kubeadm.go:883] updating cluster {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 02:06:21.363793    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:06:21.363866    3951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:06:21.378216    3951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:06:21.378227    3951 docker.go:615] Images already preloaded, skipping extraction
	I0917 02:06:21.378310    3951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:06:21.394015    3951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:06:21.394037    3951 cache_images.go:84] Images are preloaded, skipping loading
	I0917 02:06:21.394050    3951 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 02:06:21.394124    3951 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
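
The kubelet unit above uses the same trick as the docker unit earlier in this log: an empty ExecStart= directive first clears the inherited command so the ExecStart= that follows is the only one systemd sees. A small illustrative Go helper that reproduces the logged ExecStart line (the flag names are copied from the unit; the helper itself is made up):

    package main

    import (
        "fmt"
        "strings"
    )

    // kubeletExecStart rebuilds the ExecStart line shown above.
    func kubeletExecStart(version, node, nodeIP string) string {
        args := []string{
            "/var/lib/minikube/binaries/" + version + "/kubelet",
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
            "--config=/var/lib/kubelet/config.yaml",
            "--hostname-override=" + node,
            "--kubeconfig=/etc/kubernetes/kubelet.conf",
            "--node-ip=" + nodeIP,
        }
        return "ExecStart=" + strings.Join(args, " ")
    }

    func main() {
        fmt.Println(kubeletExecStart("v1.31.1", "ha-857000", "192.169.0.5"))
    }
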
	I0917 02:06:21.394209    3951 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:06:21.429497    3951 cni.go:84] Creating CNI manager for ""
	I0917 02:06:21.429509    3951 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:06:21.429523    3951 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:06:21.429538    3951 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-857000 NodeName:ha-857000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:06:21.429624    3951 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-857000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
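
The generated kubeadm config above is a single YAML stream carrying four stacked documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A quick way to sanity-check that structure in Go, using plain string handling instead of a YAML parser (sketch only):

    package main

    import (
        "fmt"
        "strings"
    )

    // docKinds pulls the kind: of each document out of a multi-document
    // YAML stream like the kubeadm config above. Plain string handling,
    // so it only suits the simple top-level layout shown.
    func docKinds(stream string) []string {
        var kinds []string
        for _, doc := range strings.Split(stream, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    kinds = append(kinds, strings.TrimPrefix(line, "kind: "))
                }
            }
        }
        return kinds
    }

    func main() {
        stream := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\n" +
            "kind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
        fmt.Println(docKinds(stream)) // the four documents generated above
    }
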
	
	I0917 02:06:21.429636    3951 kube-vip.go:115] generating kube-vip config ...
	I0917 02:06:21.429694    3951 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:06:21.442428    3951 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:06:21.442505    3951 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
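
The manifest above runs kube-vip as a static pod that claims the virtual IP 192.169.0.254 via ARP and elects a holder among the control planes with a 5s lease, 3s renew deadline, and 1s retry period; per the log line before it, control-plane load balancing (lb_enable) on port 8443 is switched on as well. Those election knobs follow the usual client-go-style ordering constraint, sketched here:

    package main

    import "fmt"

    // vipLease holds the leader-election knobs from the manifest above,
    // in seconds; the values are the ones minikube generated.
    type vipLease struct {
        LeaseDuration, RenewDeadline, RetryPeriod int
    }

    // valid applies the usual client-go-style constraint: a holder must be
    // able to renew (and retry) comfortably before its lease can lapse.
    func (l vipLease) valid() bool {
        return l.LeaseDuration > l.RenewDeadline && l.RenewDeadline > l.RetryPeriod
    }

    func main() {
        fmt.Println(vipLease{LeaseDuration: 5, RenewDeadline: 3, RetryPeriod: 1}.valid()) // true
    }
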
	I0917 02:06:21.442559    3951 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:06:21.451375    3951 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:06:21.451431    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 02:06:21.459648    3951 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 02:06:21.473122    3951 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:06:21.487014    3951 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 02:06:21.500992    3951 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:06:21.514562    3951 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:06:21.517444    3951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:06:21.527518    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:21.625140    3951 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:06:21.639257    3951 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.5
	I0917 02:06:21.639269    3951 certs.go:194] generating shared ca certs ...
	I0917 02:06:21.639280    3951 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:21.639439    3951 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:06:21.639492    3951 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:06:21.639503    3951 certs.go:256] generating profile certs ...
	I0917 02:06:21.639592    3951 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:06:21.639611    3951 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea
	I0917 02:06:21.639646    3951 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0917 02:06:21.706715    3951 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea ...
	I0917 02:06:21.706729    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea: {Name:mk3f381e64586a5cdd027dc403cd38b58de19cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:21.707284    3951 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea ...
	I0917 02:06:21.707298    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea: {Name:mk7ad610a632f0df99198e2c9491ed57c1c9afa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:21.707543    3951 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea -> /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt
	I0917 02:06:21.707724    3951 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea -> /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key
	I0917 02:06:21.707940    3951 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:06:21.707949    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:06:21.707971    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:06:21.707989    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:06:21.708013    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:06:21.708032    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:06:21.708050    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:06:21.708068    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:06:21.708087    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:06:21.708175    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:06:21.708221    3951 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:06:21.708229    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:06:21.708259    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:06:21.708290    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:06:21.708317    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:06:21.708378    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:06:21.708413    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:21.708433    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:06:21.708450    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:06:21.708936    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:06:21.731634    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:06:21.755014    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:06:21.780416    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:06:21.806686    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:06:21.830924    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:06:21.860517    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:06:21.883515    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:06:21.904458    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:06:21.940425    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:06:21.977748    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:06:22.031404    3951 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 02:06:22.066177    3951 ssh_runner.go:195] Run: openssl version
	I0917 02:06:22.070562    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:06:22.079011    3951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:22.082434    3951 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:22.082478    3951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:22.087894    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:06:22.096140    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:06:22.104466    3951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:06:22.107959    3951 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:06:22.107997    3951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:06:22.112345    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:06:22.120730    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:06:22.129508    3951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:06:22.133071    3951 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:06:22.133113    3951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:06:22.137371    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
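
The three symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is located by the subject-name hash that openssl x509 -hash -noout prints, with a numeric suffix that increments only on hash collisions. As a tiny Go helper, for clarity:

    package main

    import "fmt"

    // hashLinkName builds the /etc/ssl/certs entry OpenSSL's hashed-directory
    // scheme expects: <subject-hash>.<n>, where n starts at 0 and increments
    // only when two distinct CAs share a subject hash.
    func hashLinkName(subjectHash string, n int) string {
        return fmt.Sprintf("/etc/ssl/certs/%s.%d", subjectHash, n)
    }

    func main() {
        // Hashes printed by openssl x509 -hash -noout in the log above.
        for _, h := range []string{"b5213941", "51391683", "3ec20f2e"} {
            fmt.Println(hashLinkName(h, 0))
        }
    }
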
	I0917 02:06:22.145685    3951 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:06:22.149175    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:06:22.154152    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:06:22.158697    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:06:22.163258    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:06:22.167625    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:06:22.172054    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
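
Each openssl x509 -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now and exits non-zero if not, which is what would trigger regeneration. An equivalent check with Go's crypto/x509 (a sketch; the path in main is just one of the certs probed above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires in
    // the next d, the same question openssl x509 -checkend answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
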
	I0917 02:06:22.176282    3951 kubeadm.go:392] StartCluster: {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:06:22.176426    3951 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:06:22.189260    3951 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:06:22.196889    3951 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 02:06:22.196900    3951 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:06:22.196943    3951 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:06:22.204529    3951 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:06:22.204834    3951 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-857000" does not appear in /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:22.204922    3951 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1025/kubeconfig needs updating (will repair): [kubeconfig missing "ha-857000" cluster setting kubeconfig missing "ha-857000" context setting]
	I0917 02:06:22.205131    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:22.205534    3951 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:22.205732    3951 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3bff720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:06:22.206065    3951 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 02:06:22.206259    3951 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:06:22.213498    3951 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 02:06:22.213509    3951 kubeadm.go:597] duration metric: took 16.605221ms to restartPrimaryControlPlane
	I0917 02:06:22.213515    3951 kubeadm.go:394] duration metric: took 37.238807ms to StartCluster
	I0917 02:06:22.213523    3951 settings.go:142] acquiring lock: {Name:mk5de70c670a23cf49fdbd19ddb5c1d2a9de9a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:22.213600    3951 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:22.213968    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:22.214179    3951 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:06:22.214192    3951 start.go:241] waiting for startup goroutines ...
	I0917 02:06:22.214206    3951 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:06:22.214324    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:22.256356    3951 out.go:177] * Enabled addons: 
	I0917 02:06:22.277195    3951 addons.go:510] duration metric: took 62.984897ms for enable addons: enabled=[]
	I0917 02:06:22.277282    3951 start.go:246] waiting for cluster config update ...
	I0917 02:06:22.277295    3951 start.go:255] writing updated cluster config ...
	I0917 02:06:22.300377    3951 out.go:201] 
	I0917 02:06:22.321646    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:22.321775    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:22.343932    3951 out.go:177] * Starting "ha-857000-m02" control-plane node in "ha-857000" cluster
	I0917 02:06:22.386310    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:06:22.386345    3951 cache.go:56] Caching tarball of preloaded images
	I0917 02:06:22.386520    3951 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:06:22.386539    3951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:06:22.386678    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:22.387665    3951 start.go:360] acquireMachinesLock for ha-857000-m02: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:06:22.387781    3951 start.go:364] duration metric: took 93.188µs to acquireMachinesLock for "ha-857000-m02"
	I0917 02:06:22.387807    3951 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:06:22.387815    3951 fix.go:54] fixHost starting: m02
	I0917 02:06:22.388245    3951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:22.388280    3951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:06:22.397656    3951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51937
	I0917 02:06:22.397993    3951 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:06:22.398338    3951 main.go:141] libmachine: Using API Version  1
	I0917 02:06:22.398355    3951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:06:22.398604    3951 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:06:22.398732    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:22.398839    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetState
	I0917 02:06:22.398926    3951 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:22.398995    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3905
	I0917 02:06:22.399925    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3905 missing from process table
	I0917 02:06:22.399987    3951 fix.go:112] recreateIfNeeded on ha-857000-m02: state=Stopped err=<nil>
	I0917 02:06:22.400002    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	W0917 02:06:22.400097    3951 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:06:22.442146    3951 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m02" ...
	I0917 02:06:22.463239    3951 main.go:141] libmachine: (ha-857000-m02) Calling .Start
	I0917 02:06:22.463548    3951 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:22.463605    3951 main.go:141] libmachine: (ha-857000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid
	I0917 02:06:22.465343    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3905 missing from process table
	I0917 02:06:22.465354    3951 main.go:141] libmachine: (ha-857000-m02) DBG | pid 3905 is in state "Stopped"
	I0917 02:06:22.465372    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid...
	I0917 02:06:22.465746    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Using UUID 1940a045-0b1b-4b28-842d-d4858a62cbd3
	I0917 02:06:22.495548    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Generated MAC 9a:95:4e:4b:65:fe
	I0917 02:06:22.495583    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:06:22.495857    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000383560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:22.495910    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000383560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:22.496018    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1940a045-0b1b-4b28-842d-d4858a62cbd3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:06:22.496120    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1940a045-0b1b-4b28-842d-d4858a62cbd3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:06:22.496143    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:06:22.497973    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Pid is 3976
	I0917 02:06:22.498454    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Attempt 0
	I0917 02:06:22.498484    3951 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:22.498545    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3976
	I0917 02:06:22.500282    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Searching for 9a:95:4e:4b:65:fe in /var/db/dhcpd_leases ...
	I0917 02:06:22.500349    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:06:22.500372    3951 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea9805}
	I0917 02:06:22.500382    3951 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:06:22.500397    3951 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea97be}
	I0917 02:06:22.500410    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Found match: 9a:95:4e:4b:65:fe
	I0917 02:06:22.500437    3951 main.go:141] libmachine: (ha-857000-m02) DBG | IP: 192.169.0.6
	I0917 02:06:22.500486    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetConfigRaw
	I0917 02:06:22.501123    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:06:22.501362    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:22.501877    3951 machine.go:93] provisionDockerMachine start ...
	I0917 02:06:22.501887    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:22.502006    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:22.502140    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:22.502253    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:22.502355    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:22.502453    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:22.502592    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:22.502794    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:22.502804    3951 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:06:22.506011    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:06:22.516718    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:06:22.517536    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:22.517559    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:22.517587    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:22.517605    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:22.902525    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:06:22.902540    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:06:23.017245    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:23.017263    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:23.017272    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:23.017286    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:23.018137    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:06:23.018146    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:06:28.664665    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:06:28.664731    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:06:28.664739    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:06:28.688834    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:06:33.560885    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:06:33.560902    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:06:33.561080    3951 buildroot.go:166] provisioning hostname "ha-857000-m02"
	I0917 02:06:33.561088    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:06:33.561176    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.561264    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.561361    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.561457    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.561572    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.561724    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.561884    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.561894    3951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m02 && echo "ha-857000-m02" | sudo tee /etc/hostname
	I0917 02:06:33.626435    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m02
	
	I0917 02:06:33.626450    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.626583    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.626692    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.626783    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.626875    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.627027    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.627173    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.627184    3951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:06:33.685124    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
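	The SSH block above is minikube's idempotent /etc/hosts edit: rewrite an existing 127.0.1.1 entry to carry the new hostname, or append one if none exists. A minimal way to exercise the same grep/sed logic against a scratch copy (hypothetical /tmp path, not something this test run executed):
	
		# work on a copy so the real hosts file is untouched
		cp /etc/hosts /tmp/hosts.test
		if ! grep -q 'ha-857000-m02' /tmp/hosts.test; then
			if grep -q '^127.0.1.1[[:space:]]' /tmp/hosts.test; then
				# replace the existing 127.0.1.1 entry
				sed -i.bak 's/^127.0.1.1[[:space:]].*/127.0.1.1 ha-857000-m02/' /tmp/hosts.test
			else
				# no entry yet; append one
				echo '127.0.1.1 ha-857000-m02' >> /tmp/hosts.test
			fi
		fi
		cat /tmp/hosts.test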
	I0917 02:06:33.685140    3951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:06:33.685149    3951 buildroot.go:174] setting up certificates
	I0917 02:06:33.685155    3951 provision.go:84] configureAuth start
	I0917 02:06:33.685161    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:06:33.685285    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:06:33.685391    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.685472    3951 provision.go:143] copyHostCerts
	I0917 02:06:33.685505    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:33.685552    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:06:33.685558    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:33.685701    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:06:33.686213    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:33.686248    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:06:33.686252    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:33.686328    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:06:33.686464    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:33.686504    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:06:33.686509    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:33.686577    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:06:33.686713    3951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m02 san=[127.0.0.1 192.169.0.6 ha-857000-m02 localhost minikube]
	I0917 02:06:33.724325    3951 provision.go:177] copyRemoteCerts
	I0917 02:06:33.724374    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:06:33.724388    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.724531    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.724628    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.724718    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.724808    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:33.757977    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:06:33.758053    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:06:33.777137    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:06:33.777203    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:06:33.796184    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:06:33.796248    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:06:33.815739    3951 provision.go:87] duration metric: took 130.575095ms to configureAuth
	I0917 02:06:33.815753    3951 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:06:33.815923    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:33.815937    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:33.816066    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.816180    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.816266    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.816357    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.816435    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.816546    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.816672    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.816679    3951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:06:33.868528    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:06:33.868540    3951 buildroot.go:70] root file system type: tmpfs
	I0917 02:06:33.868626    3951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:06:33.868638    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.868774    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.868862    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.868957    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.869038    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.869178    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.869313    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.869355    3951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:06:33.934180    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
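	The empty ExecStart= line in the unit above is the standard systemd idiom for clearing an inherited start command before supplying a new one; without it, systemd would see two ExecStart= settings and refuse to start the service, exactly as the unit's own comments warn. A stripped-down drop-in showing just that pattern (illustrative path and flags, not the file minikube writes):
	
		sudo mkdir -p /etc/systemd/system/docker.service.d
		sudo tee /etc/systemd/system/docker.service.d/override.conf <<-'EOF'
		[Service]
		ExecStart=
		ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
		EOF
		sudo systemctl daemon-reload && sudo systemctl restart docker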
	
	I0917 02:06:33.934199    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.934331    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.934438    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.934537    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.934624    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.934753    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.934890    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.934902    3951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:06:35.613474    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:06:35.613490    3951 machine.go:96] duration metric: took 13.111377814s to provisionDockerMachine
	I0917 02:06:35.613498    3951 start.go:293] postStartSetup for "ha-857000-m02" (driver="hyperkit")
	I0917 02:06:35.613517    3951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:06:35.613531    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.613729    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:06:35.613743    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.613853    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.613946    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.614026    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.614114    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:35.652452    3951 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:06:35.656174    3951 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:06:35.656186    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:06:35.656273    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:06:35.656413    3951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:06:35.656420    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:06:35.656581    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:06:35.665638    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:06:35.696288    3951 start.go:296] duration metric: took 82.770634ms for postStartSetup
	I0917 02:06:35.696319    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.696511    3951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:06:35.696525    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.696625    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.696706    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.696794    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.696893    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:35.729642    3951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:06:35.729708    3951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:06:35.783199    3951 fix.go:56] duration metric: took 13.395150311s for fixHost
	I0917 02:06:35.783224    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.783375    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.783476    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.783551    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.783631    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.783768    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:35.783899    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:35.783906    3951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:06:35.838274    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726563995.926909320
	
	I0917 02:06:35.838288    3951 fix.go:216] guest clock: 1726563995.926909320
	I0917 02:06:35.838293    3951 fix.go:229] Guest: 2024-09-17 02:06:35.92690932 -0700 PDT Remote: 2024-09-17 02:06:35.783213 -0700 PDT m=+32.178408818 (delta=143.69632ms)
	I0917 02:06:35.838302    3951 fix.go:200] guest clock delta is within tolerance: 143.69632ms
	I0917 02:06:35.838306    3951 start.go:83] releasing machines lock for "ha-857000-m02", held for 13.450280733s
	I0917 02:06:35.838324    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.838459    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:06:35.861800    3951 out.go:177] * Found network options:
	I0917 02:06:35.882860    3951 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 02:06:35.903716    3951 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:06:35.903755    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.904608    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.904879    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.905023    3951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:06:35.905064    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	W0917 02:06:35.905084    3951 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:06:35.905192    3951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:06:35.905211    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.905229    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.905436    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.905470    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.905665    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.905679    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.905849    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.905865    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:35.905991    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	W0917 02:06:35.936887    3951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:06:35.936958    3951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:06:36.007933    3951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:06:36.007953    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:36.008056    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:36.024338    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:06:36.033262    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:06:36.042136    3951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:06:36.042188    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:06:36.050818    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:36.059619    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:06:36.068394    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:36.077285    3951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:06:36.086317    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:06:36.094948    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:06:36.103691    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:06:36.112538    3951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:06:36.120508    3951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:06:36.128434    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:36.230022    3951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:06:36.250428    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:36.250505    3951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:06:36.273190    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:36.285496    3951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:06:36.303235    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:36.314994    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:36.325990    3951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:06:36.351133    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:36.362290    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:36.377230    3951 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:06:36.380093    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:06:36.387911    3951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:06:36.401199    3951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:06:36.507714    3951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:06:36.609258    3951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:06:36.609285    3951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
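	The 130-byte /etc/docker/daemon.json payload scp'd above is not echoed in the log; the cgroup-driver switch it performs typically looks like the sketch below (an illustrative payload using the documented exec-opts key, not the exact bytes minikube sent):
	
		# hypothetical reconstruction of the cgroup-driver setting
		sudo tee /etc/docker/daemon.json <<-'EOF'
		{
		  "exec-opts": ["native.cgroupdriver=cgroupfs"]
		}
		EOF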
	I0917 02:06:36.623332    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:36.718880    3951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:07:37.748739    3951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.028781405s)
	I0917 02:07:37.748815    3951 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 02:07:37.786000    3951 out.go:201] 
	W0917 02:07:37.809190    3951 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 09:06:34 ha-857000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:06:34 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:34.324120961Z" level=info msg="Starting up"
	Sep 17 09:06:34 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:34.324775253Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 09:06:34 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:34.325518826Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=488
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.341058185Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356213648Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356261078Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356303349Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356313782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356436154Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356475371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356593098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356628148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356640458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356648167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356767218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356926440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358525862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358564683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358679405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358712925Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358797431Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358843725Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.360911977Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.360974504Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361053471Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361068314Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361078324Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361121426Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361365784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361471567Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361506271Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361517719Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361527110Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361535526Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361543621Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361552701Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361562674Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361570939Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361578985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361588503Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361603316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361612406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361620269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361628602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361638647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361646859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361654306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361662885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361671295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361681400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361690597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361698250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361705966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361720758Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361737654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361746364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361754112Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361847279Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361861726Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361869503Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361877991Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361885443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361899338Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361911740Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362480967Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362549430Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362632268Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362920029Z" level=info msg="containerd successfully booted in 0.022632s"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.344850604Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.385337180Z" level=info msg="Loading containers: start."
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.568192740Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.627785197Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.670471622Z" level=info msg="Loading containers: done."
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.677239663Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.677408183Z" level=info msg="Daemon has completed initialization"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.699597178Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 09:06:35 ha-857000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.704823863Z" level=info msg="API listen on [::]:2376"
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.821530126Z" level=info msg="Processing signal 'terminated'"
	Sep 17 09:06:36 ha-857000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.822577679Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.823011519Z" level=info msg="Daemon shutdown complete"
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.823037716Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.823053677Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 09:06:37 ha-857000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 09:06:37 ha-857000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 09:06:37 ha-857000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:06:37 ha-857000-m02 dockerd[1158]: time="2024-09-17T09:06:37.864990112Z" level=info msg="Starting up"
	Sep 17 09:07:37 ha-857000-m02 dockerd[1158]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 09:07:37 ha-857000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 09:07:37 ha-857000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 09:07:37 ha-857000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
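	The restart failure above reduces to a single line: the relaunched dockerd (pid 1158) timed out dialing /run/containerd/containerd.sock, the system containerd socket, whereas the first instance (pid 481) had spawned and used its own managed containerd. The usual first checks on the guest, assuming SSH access to the node (standard systemd/containerd commands; their output was not captured in this run):
	
		sudo systemctl status containerd docker --no-pager
		sudo journalctl -u containerd --no-pager | tail -n 50
		ls -l /run/containerd/containerd.sock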
	W0917 02:07:37.809292    3951 out.go:270] * 
	W0917 02:07:37.810458    3951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:07:37.874286    3951 out.go:201] 
	
	
	==> Docker <==
	Sep 17 09:06:28 ha-857000 dockerd[1182]: time="2024-09-17T09:06:28.953788653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:06:50 ha-857000 dockerd[1176]: time="2024-09-17T09:06:50.570882674Z" level=info msg="ignoring event" container=6dbb2f7111cc445377bba802440bf8e10f56f3e5a0e88f69f41d840619ffa219 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:06:50 ha-857000 dockerd[1182]: time="2024-09-17T09:06:50.571701735Z" level=info msg="shim disconnected" id=6dbb2f7111cc445377bba802440bf8e10f56f3e5a0e88f69f41d840619ffa219 namespace=moby
	Sep 17 09:06:50 ha-857000 dockerd[1182]: time="2024-09-17T09:06:50.571758895Z" level=warning msg="cleaning up after shim disconnected" id=6dbb2f7111cc445377bba802440bf8e10f56f3e5a0e88f69f41d840619ffa219 namespace=moby
	Sep 17 09:06:50 ha-857000 dockerd[1182]: time="2024-09-17T09:06:50.571767359Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:06:51 ha-857000 dockerd[1176]: time="2024-09-17T09:06:51.580125433Z" level=info msg="ignoring event" container=f1252151601a7acad5aef9c02d49be638390576e96a0ad87836a6b56f813f5c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:06:51 ha-857000 dockerd[1182]: time="2024-09-17T09:06:51.581344041Z" level=info msg="shim disconnected" id=f1252151601a7acad5aef9c02d49be638390576e96a0ad87836a6b56f813f5c1 namespace=moby
	Sep 17 09:06:51 ha-857000 dockerd[1182]: time="2024-09-17T09:06:51.581601552Z" level=warning msg="cleaning up after shim disconnected" id=f1252151601a7acad5aef9c02d49be638390576e96a0ad87836a6b56f813f5c1 namespace=moby
	Sep 17 09:06:51 ha-857000 dockerd[1182]: time="2024-09-17T09:06:51.581639267Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.085279461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.085342970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.085355817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.085528340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.087547026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.087599271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.087608710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.087706284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:07:33 ha-857000 dockerd[1176]: time="2024-09-17T09:07:33.582121952Z" level=info msg="ignoring event" container=2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:07:33 ha-857000 dockerd[1182]: time="2024-09-17T09:07:33.583058738Z" level=info msg="shim disconnected" id=2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437 namespace=moby
	Sep 17 09:07:33 ha-857000 dockerd[1182]: time="2024-09-17T09:07:33.583223961Z" level=warning msg="cleaning up after shim disconnected" id=2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437 namespace=moby
	Sep 17 09:07:33 ha-857000 dockerd[1182]: time="2024-09-17T09:07:33.583260138Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:07:34 ha-857000 dockerd[1176]: time="2024-09-17T09:07:34.599859784Z" level=info msg="ignoring event" container=5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:07:34 ha-857000 dockerd[1182]: time="2024-09-17T09:07:34.601045096Z" level=info msg="shim disconnected" id=5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761 namespace=moby
	Sep 17 09:07:34 ha-857000 dockerd[1182]: time="2024-09-17T09:07:34.601095683Z" level=warning msg="cleaning up after shim disconnected" id=5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761 namespace=moby
	Sep 17 09:07:34 ha-857000 dockerd[1182]: time="2024-09-17T09:07:34.601106271Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2d39a363ecf53       6bab7719df100                                                                                         25 seconds ago       Exited              kube-apiserver            2                   d1c62bd0a7eda       kube-apiserver-ha-857000
	5043e9bda2acc       175ffd71cce3d                                                                                         25 seconds ago       Exited              kube-controller-manager   2                   d830cb545033a       kube-controller-manager-ha-857000
	034279696db8f       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   4205e70bfa1bb       kube-vip-ha-857000
	d9fae1497b048       9aa1fad941575                                                                                         About a minute ago   Running             kube-scheduler            1                   37d9fe68f2e59       kube-scheduler-ha-857000
	f4f59b8c76404       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      1                   a23094a650513       etcd-ha-857000
	fe908ac73b00f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited              busybox                   0                   80864159ef38e       busybox-7dff88458-4jzg8
	521527f17691c       c69fa2e9cbf5f                                                                                         6 minutes ago        Exited              coredns                   0                   aa21641a5b16e       coredns-7c65d6cfc9-nl5j5
	f991c8e956d90       c69fa2e9cbf5f                                                                                         6 minutes ago        Exited              coredns                   0                   da08087b51cd9       coredns-7c65d6cfc9-fg65r
	611759af4bf7a       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   08dee0a668f3d       storage-provisioner
	5d84a01abd3e7       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              6 minutes ago        Exited              kindnet-cni               0                   38db6fab73655       kindnet-7pf7v
	0b03e5e488939       60c005f310ff3                                                                                         6 minutes ago        Exited              kube-proxy                0                   067bc1b2ad7fa       kube-proxy-vskbj
	fcb7038a6ac9e       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago        Exited              kube-vip                  0                   b74867bd31c54       kube-vip-ha-857000
	2da1b67c167c6       9aa1fad941575                                                                                         6 minutes ago        Exited              kube-scheduler            0                   f2b2b320ed41a       kube-scheduler-ha-857000
	6989933ec650e       2e96e5913fc06                                                                                         6 minutes ago        Exited              etcd                      0                   43536bf53cbec       etcd-ha-857000
	
	
	==> coredns [521527f17691] <==
	[INFO] 10.244.2.2:33230 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100028s
	[INFO] 10.244.2.2:37727 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077614s
	[INFO] 10.244.2.2:51233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090375s
	[INFO] 10.244.1.2:43082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115984s
	[INFO] 10.244.1.2:45048 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000071244s
	[INFO] 10.244.1.2:48877 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106601s
	[INFO] 10.244.1.2:59235 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000068348s
	[INFO] 10.244.1.2:53808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064222s
	[INFO] 10.244.1.2:54982 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064992s
	[INFO] 10.244.0.4:59177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012236s
	[INFO] 10.244.0.4:59468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096608s
	[INFO] 10.244.0.4:49953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108018s
	[INFO] 10.244.2.2:36658 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081427s
	[INFO] 10.244.1.2:53166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140458s
	[INFO] 10.244.1.2:60442 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069729s
	[INFO] 10.244.0.4:60564 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007076s
	[INFO] 10.244.0.4:57696 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000125726s
	[INFO] 10.244.2.2:33447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114855s
	[INFO] 10.244.2.2:49647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000058138s
	[INFO] 10.244.2.2:55869 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00009725s
	[INFO] 10.244.1.2:49826 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096631s
	[INFO] 10.244.1.2:33376 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000046366s
	[INFO] 10.244.1.2:47184 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f991c8e956d9] <==
	[INFO] 10.244.1.2:36169 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206963s
	[INFO] 10.244.1.2:33814 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000088589s
	[INFO] 10.244.1.2:57385 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000535008s
	[INFO] 10.244.0.4:54856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135529s
	[INFO] 10.244.0.4:47831 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.019088159s
	[INFO] 10.244.0.4:46325 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201714s
	[INFO] 10.244.0.4:45239 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000255383s
	[INFO] 10.244.0.4:55042 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000141827s
	[INFO] 10.244.2.2:47888 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108401s
	[INFO] 10.244.2.2:41486 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00044994s
	[INFO] 10.244.2.2:50623 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082841s
	[INFO] 10.244.1.2:54143 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098038s
	[INFO] 10.244.1.2:38802 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046632s
	[INFO] 10.244.0.4:39532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.002579505s
	[INFO] 10.244.2.2:53978 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077749s
	[INFO] 10.244.2.2:60710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092889s
	[INFO] 10.244.2.2:51255 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044117s
	[INFO] 10.244.1.2:36996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056219s
	[INFO] 10.244.1.2:39487 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090704s
	[INFO] 10.244.0.4:39831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131192s
	[INFO] 10.244.0.4:35770 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154922s
	[INFO] 10.244.2.2:45820 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113973s
	[INFO] 10.244.1.2:44519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120184s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0917 09:07:39.282414    2579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 09:07:39.284117    2579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 09:07:39.285588    2579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 09:07:39.286935    2579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 09:07:39.288523    2579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
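	
	The describe step fails because nothing is answering on 127.0.0.1:8443: the kube-apiserver container has exited (see the container status table above), so every API-group probe is refused at the TCP level before kubectl can do anything. A minimal Go sketch of the same reachability check, illustrative only (the messages printed are assumptions, not part of the test harness):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Nothing listens on 8443 while kube-apiserver crash-loops, so this
		// dial fails exactly like the kubectl probes above:
		//   dial tcp 127.0.0.1:8443: connect: connection refused
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}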
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035496] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007916] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.708278] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007008] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.718295] systemd-fstab-generator[126]: Ignoring "noauto" option for root device
	[  +2.225909] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.454376] systemd-fstab-generator[462]: Ignoring "noauto" option for root device
	[  +0.098861] systemd-fstab-generator[474]: Ignoring "noauto" option for root device
	[  +1.963292] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.257545] systemd-fstab-generator[1142]: Ignoring "noauto" option for root device
	[  +0.117262] systemd-fstab-generator[1154]: Ignoring "noauto" option for root device
	[  +0.053463] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.056651] systemd-fstab-generator[1168]: Ignoring "noauto" option for root device
	[  +2.442306] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	[  +0.098086] systemd-fstab-generator[1395]: Ignoring "noauto" option for root device
	[  +0.113966] systemd-fstab-generator[1407]: Ignoring "noauto" option for root device
	[  +0.114036] systemd-fstab-generator[1422]: Ignoring "noauto" option for root device
	[  +0.434156] systemd-fstab-generator[1582]: Ignoring "noauto" option for root device
	[  +6.997669] kauditd_printk_skb: 190 callbacks suppressed
	[ +21.952863] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [6989933ec650] <==
	2024/09/17 09:05:55 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T09:05:55.944117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.248286841s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T09:05:55.944127Z","caller":"traceutil/trace.go:171","msg":"trace[182147551] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; }","duration":"5.248299338s","start":"2024-09-17T09:05:50.695825Z","end":"2024-09-17T09:05:55.944124Z","steps":["trace[182147551] 'agreement among raft nodes before linearized reading'  (duration: 5.248286916s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T09:05:55.944136Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T09:05:50.695789Z","time spent":"5.248344269s","remote":"127.0.0.1:52050","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":0,"request content":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true "}
	2024/09/17 09:05:55 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T09:05:55.984755Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T09:05:55.984786Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T09:05:55.984817Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-17T09:05:55.987724Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.987747Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988090Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988144Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988200Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988245Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988255Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988259Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988265Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988292Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988663Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988686Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988708Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988717Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.991208Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T09:05:55.991249Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T09:05:55.991256Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-857000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [f4f59b8c7640] <==
	{"level":"info","ts":"2024-09-17T09:07:36.770332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:36.770499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:36.770523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:36.770542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:36.770677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:07:37.206509Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:07:37.707446Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275663,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-17T09:07:38.071050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:38.071075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:38.071083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:38.071092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:38.071097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:07:38.208614Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:07:38.709238Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:07:39.209474Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:07:39.221977Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-857000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-09-17T09:07:39.236049Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:07:39.236062Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:07:39.248522Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-17T09:07:39.248560Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"info","ts":"2024-09-17T09:07:39.370293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:39.370405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:39.370425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:39.370451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:39.370466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
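	
	The member above is stuck in raft pre-vote: at each election timeout it asks peers 4843c5334ac100b7 (192.169.0.7, i/o timeout) and ff70cdb626651bff (192.169.0.6, connection refused) for votes at term 2, but with both peers unreachable it can never assemble a majority, which is also why every linearizable read stalls in the "waiting for ReadIndex response" retries. A back-of-the-envelope sketch of the quorum arithmetic (illustrative only, not etcd source):
	
	package main
	
	import "fmt"
	
	// quorum returns the raft majority for a cluster of n voting members.
	func quorum(n int) int { return n/2 + 1 }
	
	func main() {
		members := 3   // ha-857000 plus its two peers
		reachable := 1 // only the local member can grant itself a pre-vote
		fmt.Printf("votes needed: %d, votes available: %d\n", quorum(members), reachable)
		if reachable < quorum(members) {
			fmt.Println("election cannot succeed; reads keep blocking on ReadIndex")
		}
	}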
	
	
	==> kernel <==
	 09:07:39 up 1 min,  0 users,  load average: 0.18, 0.09, 0.03
	Linux ha-857000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5d84a01abd3e] <==
	I0917 09:05:22.964948       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:32.966280       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:32.966503       1 main.go:299] handling current node
	I0917 09:05:32.966605       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:32.966739       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:32.966951       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:32.967059       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:32.967333       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:32.967449       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:42.964585       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:42.964999       1 main.go:299] handling current node
	I0917 09:05:42.965252       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:42.965422       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:42.965746       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:42.965829       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:42.966204       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:42.966357       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965279       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:52.965376       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:52.965533       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:52.965592       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:52.965673       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:52.965753       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965812       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:52.965902       1 main.go:299] handling current node
	
	
	==> kube-apiserver [2d39a363ecf5] <==
	I0917 09:07:13.208670       1 options.go:228] external host was not specified, using 192.169.0.5
	I0917 09:07:13.210069       1 server.go:142] Version: v1.31.1
	I0917 09:07:13.210101       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:07:13.559096       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 09:07:13.563623       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 09:07:13.563675       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 09:07:13.563832       1 instance.go:232] Using reconciler: lease
	I0917 09:07:13.564198       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0917 09:07:33.559987       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 09:07:33.560076       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 09:07:33.564856       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0917 09:07:33.565081       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [5043e9bda2ac] <==
	I0917 09:07:13.602574       1 serving.go:386] Generated self-signed cert in-memory
	I0917 09:07:13.965025       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 09:07:13.965059       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:07:13.966263       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 09:07:13.966394       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 09:07:13.966278       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 09:07:13.966269       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0917 09:07:34.581719       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [0b03e5e48893] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:00:59.069869       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:00:59.079118       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 09:00:59.079199       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:00:59.109184       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:00:59.109227       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:00:59.109245       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:00:59.111661       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:00:59.111847       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:00:59.111876       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:59.112952       1 config.go:199] "Starting service config controller"
	I0917 09:00:59.112979       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:00:59.112995       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:00:59.112998       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:00:59.113603       1 config.go:328] "Starting node config controller"
	I0917 09:00:59.113673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:00:59.213587       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:00:59.213649       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:00:59.213808       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2da1b67c167c] <==
	E0917 09:03:33.866320       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 887505bd-cf68-4e77-be17-99550df4b4b4(default/busybox-7dff88458-4jzg8) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-4jzg8"
	E0917 09:03:33.866475       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4jzg8\": pod busybox-7dff88458-4jzg8 is already assigned to node \"ha-857000\"" pod="default/busybox-7dff88458-4jzg8"
	I0917 09:03:33.866490       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-4jzg8" node="ha-857000"
	E0917 09:03:33.876570       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5x9l8\": pod busybox-7dff88458-5x9l8 is already assigned to node \"ha-857000-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-5x9l8" node="ha-857000-m03"
	E0917 09:03:33.876627       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod dfc21081-4b44-4f15-9713-8dbd1797a985(default/busybox-7dff88458-5x9l8) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-5x9l8"
	E0917 09:03:33.876641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5x9l8\": pod busybox-7dff88458-5x9l8 is already assigned to node \"ha-857000-m03\"" pod="default/busybox-7dff88458-5x9l8"
	I0917 09:03:33.876653       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5x9l8" node="ha-857000-m03"
	E0917 09:04:05.799466       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zchkt\": pod kube-proxy-zchkt is already assigned to node \"ha-857000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zchkt" node="ha-857000-m04"
	E0917 09:04:05.799587       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c03ed58b-9571-4d9e-bb6b-c12332f7766a(kube-system/kube-proxy-zchkt) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zchkt"
	E0917 09:04:05.799651       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zchkt\": pod kube-proxy-zchkt is already assigned to node \"ha-857000-m04\"" pod="kube-system/kube-proxy-zchkt"
	I0917 09:04:05.799843       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zchkt" node="ha-857000-m04"
	E0917 09:04:05.810597       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4jk9v\": pod kindnet-4jk9v is already assigned to node \"ha-857000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4jk9v" node="ha-857000-m04"
	E0917 09:04:05.810752       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 24a018c6-9cbb-4d17-a295-8fef456534a0(kube-system/kindnet-4jk9v) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4jk9v"
	E0917 09:04:05.811044       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4jk9v\": pod kindnet-4jk9v is already assigned to node \"ha-857000-m04\"" pod="kube-system/kindnet-4jk9v"
	I0917 09:04:05.811236       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4jk9v" node="ha-857000-m04"
	E0917 09:04:05.816361       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-q5f2s\": pod kindnet-q5f2s is already assigned to node \"ha-857000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-q5f2s" node="ha-857000-m04"
	E0917 09:04:05.816486       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-q5f2s\": pod kindnet-q5f2s is already assigned to node \"ha-857000-m04\"" pod="kube-system/kindnet-q5f2s"
	E0917 09:04:05.829276       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tbbh2\": pod kindnet-tbbh2 is already assigned to node \"ha-857000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tbbh2" node="ha-857000-m04"
	E0917 09:04:05.829463       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a3360b43-cfb5-45f5-9de3-cb8bfd82ac14(kube-system/kindnet-tbbh2) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tbbh2"
	E0917 09:04:05.829578       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tbbh2\": pod kindnet-tbbh2 is already assigned to node \"ha-857000-m04\"" pod="kube-system/kindnet-tbbh2"
	I0917 09:04:05.829611       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tbbh2" node="ha-857000-m04"
	I0917 09:05:55.853932       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 09:05:55.858618       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0917 09:05:55.858815       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 09:05:55.881585       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d9fae1497b04] <==
	E0917 09:07:31.901651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 09:07:32.071397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 09:07:32.071649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 09:07:34.580116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33886->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.580206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33886->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.580455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42496->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.580535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42496->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.580863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33904->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.580996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33904->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.581303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33898->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.581360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33898->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.581744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33862->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.582050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33862->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.582256       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33854->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.582356       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33854->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.582989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42470->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.583036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42470->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.583221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42488->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.583273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42488->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.583499       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33884->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.583538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33884->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.583720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42464->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.583760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42464->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.583992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42512->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.584033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42512->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 17 09:07:22 ha-857000 kubelet[1589]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 09:07:22 ha-857000 kubelet[1589]: E0917 09:07:22.089827    1589 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-857000\" not found"
	Sep 17 09:07:23 ha-857000 kubelet[1589]: E0917 09:07:23.404075    1589 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-857000.17f5fccb3d09c90c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-857000,UID:ha-857000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-857000,},FirstTimestamp:2024-09-17 09:06:21.999065356 +0000 UTC m=+0.224912397,LastTimestamp:2024-09-17 09:06:21.999065356 +0000 UTC m=+0.224912397,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-857000,}"
	Sep 17 09:07:27 ha-857000 kubelet[1589]: I0917 09:07:27.335343    1589 kubelet_node_status.go:72] "Attempting to register node" node="ha-857000"
	Sep 17 09:07:29 ha-857000 kubelet[1589]: E0917 09:07:29.548070    1589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-857000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 09:07:29 ha-857000 kubelet[1589]: E0917 09:07:29.548681    1589 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-857000"
	Sep 17 09:07:32 ha-857000 kubelet[1589]: E0917 09:07:32.090152    1589 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-857000\" not found"
	Sep 17 09:07:33 ha-857000 kubelet[1589]: I0917 09:07:33.850461    1589 scope.go:117] "RemoveContainer" containerID="6dbb2f7111cc445377bba802440bf8e10f56f3e5a0e88f69f41d840619ffa219"
	Sep 17 09:07:33 ha-857000 kubelet[1589]: I0917 09:07:33.851330    1589 scope.go:117] "RemoveContainer" containerID="2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437"
	Sep 17 09:07:33 ha-857000 kubelet[1589]: E0917 09:07:33.851432    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-857000_kube-system(f9e4594843635b7ed6662a3474d619e9)\"" pod="kube-system/kube-apiserver-ha-857000" podUID="f9e4594843635b7ed6662a3474d619e9"
	Sep 17 09:07:34 ha-857000 kubelet[1589]: I0917 09:07:34.867010    1589 scope.go:117] "RemoveContainer" containerID="f1252151601a7acad5aef9c02d49be638390576e96a0ad87836a6b56f813f5c1"
	Sep 17 09:07:34 ha-857000 kubelet[1589]: I0917 09:07:34.868149    1589 scope.go:117] "RemoveContainer" containerID="5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761"
	Sep 17 09:07:34 ha-857000 kubelet[1589]: E0917 09:07:34.868227    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857000_kube-system(2e359dfc5a9c04b45f1f4ad5b0c126ca)\"" pod="kube-system/kube-controller-manager-ha-857000" podUID="2e359dfc5a9c04b45f1f4ad5b0c126ca"
	Sep 17 09:07:35 ha-857000 kubelet[1589]: E0917 09:07:35.692107    1589 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-857000.17f5fccb3d09c90c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-857000,UID:ha-857000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-857000,},FirstTimestamp:2024-09-17 09:06:21.999065356 +0000 UTC m=+0.224912397,LastTimestamp:2024-09-17 09:06:21.999065356 +0000 UTC m=+0.224912397,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-857000,}"
	Sep 17 09:07:36 ha-857000 kubelet[1589]: I0917 09:07:36.557057    1589 kubelet_node_status.go:72] "Attempting to register node" node="ha-857000"
	Sep 17 09:07:37 ha-857000 kubelet[1589]: I0917 09:07:37.880843    1589 scope.go:117] "RemoveContainer" containerID="5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761"
	Sep 17 09:07:37 ha-857000 kubelet[1589]: E0917 09:07:37.881410    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857000_kube-system(2e359dfc5a9c04b45f1f4ad5b0c126ca)\"" pod="kube-system/kube-controller-manager-ha-857000" podUID="2e359dfc5a9c04b45f1f4ad5b0c126ca"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: E0917 09:07:38.763996    1589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-857000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: W0917 09:07:38.764000    1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 09:07:38 ha-857000 kubelet[1589]: E0917 09:07:38.764047    1589 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-857000"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: E0917 09:07:38.764044    1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: I0917 09:07:38.848033    1589 scope.go:117] "RemoveContainer" containerID="2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: E0917 09:07:38.848255    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-857000_kube-system(f9e4594843635b7ed6662a3474d619e9)\"" pod="kube-system/kube-apiserver-ha-857000" podUID="f9e4594843635b7ed6662a3474d619e9"
	Sep 17 09:07:40 ha-857000 kubelet[1589]: I0917 09:07:40.089264    1589 scope.go:117] "RemoveContainer" containerID="5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761"
	Sep 17 09:07:40 ha-857000 kubelet[1589]: E0917 09:07:40.089464    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857000_kube-system(2e359dfc5a9c04b45f1f4ad5b0c126ca)\"" pod="kube-system/kube-controller-manager-ha-857000" podUID="2e359dfc5a9c04b45f1f4ad5b0c126ca"
	

-- /stdout --
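The kubelet failures above all dial https://control-plane.minikube.internal:8443, which maps to the HA apiserver VIP 192.169.0.254 (the APIServerHAVIP in the cluster config later in this log), so "no route to host" points at the VIP rather than at any single apiserver container. A minimal Go probe for that endpoint; the function name, timeout, and hard-coded address are illustrative assumptions, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// vipReachable dials the HA apiserver VIP that every failing
	// kubelet call above targets; "connect: no route to host"
	// surfaces as a non-nil error from DialTimeout.
	func vipReachable(addr string) bool {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		fmt.Println(vipReachable("192.169.0.254:8443"))
	}

A false result here would isolate the failure to the VIP/routing layer, consistent with every component on the node crash-looping against the same address.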
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-857000 -n ha-857000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-857000 -n ha-857000: exit status 2 (147.861083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-857000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (124.08s)
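The skip at helpers_test.go:256 above follows from gating kubectl post-mortem commands on what `minikube status --format={{.APIServer}}` prints; `minikube status` intentionally exits non-zero when components are down, which is why the harness notes "may be ok". A rough Go sketch of that gate; the helper name and the ignored exit error are assumptions for illustration:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// apiServerState returns the apiserver field that
	// `minikube status --format={{.APIServer}}` prints for a profile.
	// Output() still yields stdout when the command exits non-zero,
	// so the *ExitError is deliberately ignored here.
	func apiServerState(bin, profile string) string {
		out, _ := exec.Command(bin, "status",
			"--format={{.APIServer}}", "-p", profile, "-n", profile).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		if st := apiServerState("out/minikube-darwin-amd64", "ha-857000"); st != "Running" {
			fmt.Printf("apiserver is not running, skipping kubectl commands (state=%q)\n", st)
		}
	}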

TestMultiControlPlane/serial/DeleteSecondaryNode (2.85s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-857000 node delete m03 -v=7 --alsologtostderr: exit status 83 (174.811526ms)

-- stdout --
	* The control-plane node ha-857000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-857000"

-- /stdout --
** stderr ** 
	I0917 02:07:40.605447    4013 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:07:40.605757    4013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:07:40.605763    4013 out.go:358] Setting ErrFile to fd 2...
	I0917 02:07:40.605766    4013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:07:40.605956    4013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:07:40.606342    4013 mustload.go:65] Loading cluster: ha-857000
	I0917 02:07:40.606697    4013 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:07:40.607076    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.607120    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.615516    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51980
	I0917 02:07:40.615914    4013 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.616307    4013 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.616341    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.616576    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.616693    4013 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:07:40.616781    4013 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:07:40.616840    4013 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3964
	I0917 02:07:40.617813    4013 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:07:40.618072    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.618094    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.626339    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51982
	I0917 02:07:40.626682    4013 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.627050    4013 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.627066    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.627300    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.627424    4013 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:07:40.627806    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.627828    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.636079    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51984
	I0917 02:07:40.636415    4013 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.636745    4013 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.636757    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.636965    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.637074    4013 main.go:141] libmachine: (ha-857000-m02) Calling .GetState
	I0917 02:07:40.637162    4013 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:07:40.637247    4013 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3976
	I0917 02:07:40.638199    4013 host.go:66] Checking if "ha-857000-m02" exists ...
	I0917 02:07:40.638450    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.638474    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.646734    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51986
	I0917 02:07:40.647119    4013 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.647443    4013 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.647453    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.647666    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.647772    4013 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:07:40.648176    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.648221    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.656479    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51988
	I0917 02:07:40.656852    4013 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.657199    4013 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.657215    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.657449    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.657575    4013 main.go:141] libmachine: (ha-857000-m03) Calling .GetState
	I0917 02:07:40.657664    4013 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:07:40.657739    4013 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid from json: 3442
	I0917 02:07:40.658666    4013 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid 3442 missing from process table
	I0917 02:07:40.681225    4013 out.go:177] * The control-plane node ha-857000-m03 host is not running: state=Stopped
	I0917 02:07:40.702963    4013 out.go:177]   To start a cluster, run: "minikube start -p ha-857000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-amd64 -p ha-857000 node delete m03 -v=7 --alsologtostderr": exit status 83
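The delete aborts because the driver's GetState for m03 finds the recorded hyperkit pid 3442 gone, per the "hyperkit pid 3442 missing from process table" debug line, so the node is reported Stopped before any deletion work starts. A check of that kind can be sketched with the conventional kill(pid, 0) probe; the function name is an assumption, and EPERM would also come back as an error even though the process exists:

	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	// pidAlive reports whether a process with the given pid can be
	// signalled, using signal 0 (no signal is actually delivered).
	// A stale hyperkit.pid whose pid fails this probe matches the
	// "missing from process table" lines above.
	func pidAlive(pid int) bool {
		p, err := os.FindProcess(pid) // never fails on Unix
		if err != nil {
			return false
		}
		return p.Signal(syscall.Signal(0)) == nil
	}

	func main() {
		fmt.Println(pidAlive(3442))
	}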
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr: exit status 7 (250.115366ms)

-- stdout --
	ha-857000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-857000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-857000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-857000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 02:07:40.780970    4020 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:07:40.781156    4020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:07:40.781161    4020 out.go:358] Setting ErrFile to fd 2...
	I0917 02:07:40.781165    4020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:07:40.781349    4020 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:07:40.781530    4020 out.go:352] Setting JSON to false
	I0917 02:07:40.781553    4020 mustload.go:65] Loading cluster: ha-857000
	I0917 02:07:40.781592    4020 notify.go:220] Checking for updates...
	I0917 02:07:40.781939    4020 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:07:40.781952    4020 status.go:255] checking status of ha-857000 ...
	I0917 02:07:40.782359    4020 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.782433    4020 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.791399    4020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51991
	I0917 02:07:40.791748    4020 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.792240    4020 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.792255    4020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.792468    4020 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.792592    4020 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:07:40.792675    4020 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:07:40.792745    4020 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3964
	I0917 02:07:40.793705    4020 status.go:330] ha-857000 host status = "Running" (err=<nil>)
	I0917 02:07:40.793725    4020 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:07:40.793971    4020 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.793991    4020 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.802292    4020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51993
	I0917 02:07:40.802627    4020 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.802998    4020 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.803014    4020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.803261    4020 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.803382    4020 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:07:40.803474    4020 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:07:40.803729    4020 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.803757    4020 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.812123    4020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51995
	I0917 02:07:40.812443    4020 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.812767    4020 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.812783    4020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.813012    4020 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.813119    4020 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:07:40.813255    4020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:07:40.813274    4020 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:07:40.813354    4020 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:07:40.813427    4020 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:07:40.813509    4020 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:07:40.813586    4020 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:07:40.844103    4020 ssh_runner.go:195] Run: systemctl --version
	I0917 02:07:40.848335    4020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:07:40.859992    4020 kubeconfig.go:125] found "ha-857000" server: "https://192.169.0.254:8443"
	I0917 02:07:40.860015    4020 api_server.go:166] Checking apiserver status ...
	I0917 02:07:40.860062    4020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0917 02:07:40.870786    4020 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:07:40.870802    4020 status.go:422] ha-857000 apiserver status = Running (err=<nil>)
	I0917 02:07:40.870810    4020 status.go:257] ha-857000 status: &{Name:ha-857000 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:07:40.870823    4020 status.go:255] checking status of ha-857000-m02 ...
	I0917 02:07:40.871089    4020 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.871112    4020 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.879589    4020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51998
	I0917 02:07:40.879936    4020 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.880268    4020 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.880280    4020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.880497    4020 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.880610    4020 main.go:141] libmachine: (ha-857000-m02) Calling .GetState
	I0917 02:07:40.880693    4020 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:07:40.880758    4020 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3976
	I0917 02:07:40.881731    4020 status.go:330] ha-857000-m02 host status = "Running" (err=<nil>)
	I0917 02:07:40.881740    4020 host.go:66] Checking if "ha-857000-m02" exists ...
	I0917 02:07:40.882009    4020 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.882032    4020 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.890412    4020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52000
	I0917 02:07:40.890727    4020 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.891075    4020 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.891090    4020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.891297    4020 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.891435    4020 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:07:40.891520    4020 host.go:66] Checking if "ha-857000-m02" exists ...
	I0917 02:07:40.891791    4020 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.891817    4020 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.900157    4020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52002
	I0917 02:07:40.900522    4020 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.900867    4020 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.900884    4020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.901108    4020 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.901224    4020 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:07:40.901359    4020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:07:40.901371    4020 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:07:40.901459    4020 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:07:40.901547    4020 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:07:40.901657    4020 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:07:40.901745    4020 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:07:40.931514    4020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:07:40.942273    4020 kubeconfig.go:125] found "ha-857000" server: "https://192.169.0.254:8443"
	I0917 02:07:40.942287    4020 api_server.go:166] Checking apiserver status ...
	I0917 02:07:40.942335    4020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0917 02:07:40.952106    4020 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:07:40.952122    4020 status.go:422] ha-857000-m02 apiserver status = Stopped (err=<nil>)
	I0917 02:07:40.952128    4020 status.go:257] ha-857000-m02 status: &{Name:ha-857000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:07:40.952139    4020 status.go:255] checking status of ha-857000-m03 ...
	I0917 02:07:40.952419    4020 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.952442    4020 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.961257    4020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52005
	I0917 02:07:40.961624    4020 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.961994    4020 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.962009    4020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.962209    4020 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.962337    4020 main.go:141] libmachine: (ha-857000-m03) Calling .GetState
	I0917 02:07:40.962422    4020 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:07:40.962492    4020 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid from json: 3442
	I0917 02:07:40.963451    4020 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid 3442 missing from process table
	I0917 02:07:40.963473    4020 status.go:330] ha-857000-m03 host status = "Stopped" (err=<nil>)
	I0917 02:07:40.963481    4020 status.go:343] host is not running, skipping remaining checks
	I0917 02:07:40.963487    4020 status.go:257] ha-857000-m03 status: &{Name:ha-857000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:07:40.963498    4020 status.go:255] checking status of ha-857000-m04 ...
	I0917 02:07:40.963770    4020 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:07:40.963792    4020 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:07:40.972115    4020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52007
	I0917 02:07:40.972454    4020 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:07:40.972820    4020 main.go:141] libmachine: Using API Version  1
	I0917 02:07:40.972836    4020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:07:40.973058    4020 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:07:40.973179    4020 main.go:141] libmachine: (ha-857000-m04) Calling .GetState
	I0917 02:07:40.973278    4020 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:07:40.973349    4020 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid from json: 3550
	I0917 02:07:40.974299    4020 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid 3550 missing from process table
	I0917 02:07:40.974324    4020 status.go:330] ha-857000-m04 host status = "Stopped" (err=<nil>)
	I0917 02:07:40.974330    4020 status.go:343] host is not running, skipping remaining checks
	I0917 02:07:40.974336    4020 status.go:257] ha-857000-m04 status: &{Name:ha-857000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr" : exit status 7
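`minikube status` documents a bitwise exit encoding (1: host not OK, 2: cluster not OK, 4: Kubernetes not OK), so exit status 7 reads as all three checks failing on at least one node, which matches the Stopped entries in the table above. A small decoder under that documented convention:

	package main

	import "fmt"

	// decodeStatusExit splits a `minikube status` exit code into its
	// documented bit flags: 1 = host, 2 = cluster, 4 = Kubernetes.
	func decodeStatusExit(code int) (hostNOK, clusterNOK, k8sNOK bool) {
		return code&1 != 0, code&2 != 0, code&4 != 0
	}

	func main() {
		fmt.Println(decodeStatusExit(7)) // true true true
	}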
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-857000 -n ha-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-857000 -n ha-857000: exit status 2 (149.742349ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-857000 logs -n 25: (2.075477164s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m02 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m03_ha-857000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m03:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04:/home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m04 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp testdata/cp-test.txt                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3490570692/001/cp-test_ha-857000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000:/home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000 sudo cat                                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m02:/home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m02 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03:/home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m03 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-857000 node stop m02 -v=7                                                                                                 | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-857000 node start m02 -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:05 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000 -v=7                                                                                                       | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-857000 -v=7                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT | 17 Sep 24 02:06 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-857000 --wait=true -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:06 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	| node    | ha-857000 node delete m03 -v=7                                                                                               | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 02:06:03
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 02:06:03.640574    3951 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:06:03.641305    3951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:06:03.641314    3951 out.go:358] Setting ErrFile to fd 2...
	I0917 02:06:03.641320    3951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:06:03.641922    3951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:06:03.643438    3951 out.go:352] Setting JSON to false
	I0917 02:06:03.667323    3951 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2133,"bootTime":1726561830,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:06:03.667643    3951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:06:03.689297    3951 out.go:177] * [ha-857000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:06:03.731193    3951 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:06:03.731279    3951 notify.go:220] Checking for updates...
	I0917 02:06:03.773863    3951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:03.794994    3951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:06:03.815992    3951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:06:03.837103    3951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:06:03.858226    3951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:06:03.879788    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:03.879962    3951 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:06:03.880706    3951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:03.880768    3951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:06:03.890269    3951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51913
	I0917 02:06:03.890631    3951 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:06:03.891014    3951 main.go:141] libmachine: Using API Version  1
	I0917 02:06:03.891039    3951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:06:03.891290    3951 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:06:03.891417    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:03.920139    3951 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 02:06:03.941013    3951 start.go:297] selected driver: hyperkit
	I0917 02:06:03.941066    3951 start.go:901] validating driver "hyperkit" against &{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:fals
e efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:06:03.941369    3951 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:06:03.941551    3951 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:06:03.941770    3951 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:06:03.951375    3951 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:06:03.956115    3951 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:03.956133    3951 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:06:03.959464    3951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:06:03.959502    3951 cni.go:84] Creating CNI manager for ""
	I0917 02:06:03.959545    3951 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:06:03.959620    3951 start.go:340] cluster config:
	{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-t
iller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:06:03.959742    3951 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:06:04.002033    3951 out.go:177] * Starting "ha-857000" primary control-plane node in "ha-857000" cluster
	I0917 02:06:04.022857    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:06:04.022895    3951 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:06:04.022909    3951 cache.go:56] Caching tarball of preloaded images
	I0917 02:06:04.023022    3951 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:06:04.023030    3951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:06:04.023135    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:04.023618    3951 start.go:360] acquireMachinesLock for ha-857000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:06:04.023673    3951 start.go:364] duration metric: took 42.184µs to acquireMachinesLock for "ha-857000"
	I0917 02:06:04.023691    3951 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:06:04.023701    3951 fix.go:54] fixHost starting: 
	I0917 02:06:04.023937    3951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:04.023964    3951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:06:04.032560    3951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51915
	I0917 02:06:04.032902    3951 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:06:04.033222    3951 main.go:141] libmachine: Using API Version  1
	I0917 02:06:04.033234    3951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:06:04.033482    3951 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:06:04.033595    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:04.033680    3951 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:06:04.033773    3951 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:04.033830    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3402
	I0917 02:06:04.034740    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3402 missing from process table
	I0917 02:06:04.034780    3951 fix.go:112] recreateIfNeeded on ha-857000: state=Stopped err=<nil>
	I0917 02:06:04.034806    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	W0917 02:06:04.034888    3951 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:06:04.077159    3951 out.go:177] * Restarting existing hyperkit VM for "ha-857000" ...
	I0917 02:06:04.097853    3951 main.go:141] libmachine: (ha-857000) Calling .Start
	I0917 02:06:04.098040    3951 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:04.098062    3951 main.go:141] libmachine: (ha-857000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid
	I0917 02:06:04.099681    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3402 missing from process table
	I0917 02:06:04.099693    3951 main.go:141] libmachine: (ha-857000) DBG | pid 3402 is in state "Stopped"
	I0917 02:06:04.099713    3951 main.go:141] libmachine: (ha-857000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid...
	I0917 02:06:04.100071    3951 main.go:141] libmachine: (ha-857000) DBG | Using UUID f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c
	I0917 02:06:04.220854    3951 main.go:141] libmachine: (ha-857000) DBG | Generated MAC c2:63:2b:63:80:76
	I0917 02:06:04.220886    3951 main.go:141] libmachine: (ha-857000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:06:04.221000    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6870)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:04.221030    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6870)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:04.221075    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:06:04.221122    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:06:04.221130    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:06:04.222561    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Pid is 3964
	I0917 02:06:04.222927    3951 main.go:141] libmachine: (ha-857000) DBG | Attempt 0
	I0917 02:06:04.222940    3951 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:04.222982    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3964
	I0917 02:06:04.224835    3951 main.go:141] libmachine: (ha-857000) DBG | Searching for c2:63:2b:63:80:76 in /var/db/dhcpd_leases ...
	I0917 02:06:04.224889    3951 main.go:141] libmachine: (ha-857000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:06:04.224918    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:06:04.224931    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea97be}
	I0917 02:06:04.224951    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9724}
	I0917 02:06:04.224959    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea96ad}
	I0917 02:06:04.224964    3951 main.go:141] libmachine: (ha-857000) DBG | Found match: c2:63:2b:63:80:76
	I0917 02:06:04.224968    3951 main.go:141] libmachine: (ha-857000) DBG | IP: 192.169.0.5
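
The lease scan above is how the hyperkit driver maps the VM's generated MAC address to an IP: /var/db/dhcpd_leases is the macOS vmnet DHCP lease database, and the entry whose hardware address matches wins. A minimal Go sketch of that lookup (illustrative only, assuming the usual key=value layout of the lease file; the real parsing lives in minikube's hyperkit driver):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findIPForMAC scans a macOS vmnet dhcpd_leases file for the lease entry
    // whose hw_address matches the given MAC and returns its ip_address.
    func findIPForMAC(leasesPath, mac string) (string, error) {
    	f, err := os.Open(leasesPath)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			// hw_address=1,c2:63:2b:63:80:76 -- match on the MAC suffix.
    			if strings.HasSuffix(line, mac) {
    				return ip, nil
    			}
    		}
    	}
    	return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
    	ip, err := findIPForMAC("/var/db/dhcpd_leases", "c2:63:2b:63:80:76")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println(ip) // expected: 192.169.0.5, per the "Found match" lines above
    }
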
	I0917 02:06:04.225012    3951 main.go:141] libmachine: (ha-857000) Calling .GetConfigRaw
	I0917 02:06:04.225649    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:04.225875    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:04.226292    3951 machine.go:93] provisionDockerMachine start ...
	I0917 02:06:04.226303    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:04.226417    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:04.226547    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:04.226663    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:04.226797    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:04.226907    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:04.227062    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:04.227266    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:04.227274    3951 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:06:04.230562    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:06:04.281228    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:06:04.281906    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:04.281925    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:04.281932    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:04.281939    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:04.662879    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:06:04.662893    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:06:04.777528    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:04.777548    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:04.777560    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:04.777595    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:04.778494    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:06:04.778504    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:06:10.382594    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:06:10.382613    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:06:10.382641    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:06:10.407226    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:06:15.292530    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:06:15.292580    3951 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:06:15.292726    3951 buildroot.go:166] provisioning hostname "ha-857000"
	I0917 02:06:15.292736    3951 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:06:15.292849    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.293003    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.293094    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.293188    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.293326    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.293545    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.293705    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.293713    3951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000 && echo "ha-857000" | sudo tee /etc/hostname
	I0917 02:06:15.366591    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000
	
	I0917 02:06:15.366612    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.366751    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.366847    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.366940    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.367034    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.367186    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.367320    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.367331    3951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:06:15.430651    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:06:15.430671    3951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:06:15.430688    3951 buildroot.go:174] setting up certificates
	I0917 02:06:15.430697    3951 provision.go:84] configureAuth start
	I0917 02:06:15.430705    3951 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:06:15.430833    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:15.430948    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.431043    3951 provision.go:143] copyHostCerts
	I0917 02:06:15.431073    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:15.431127    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:06:15.431135    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:15.431279    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:06:15.431473    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:15.431502    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:06:15.431506    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:15.431572    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:06:15.431702    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:15.431739    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:06:15.431744    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:15.431808    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:06:15.431954    3951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000 san=[127.0.0.1 192.169.0.5 ha-857000 localhost minikube]
	I0917 02:06:15.502156    3951 provision.go:177] copyRemoteCerts
	I0917 02:06:15.502214    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:06:15.502227    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.502353    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.502455    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.502537    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.502627    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:15.536073    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:06:15.536152    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:06:15.555893    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:06:15.555952    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 02:06:15.576096    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:06:15.576155    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:06:15.595956    3951 provision.go:87] duration metric: took 165.243542ms to configureAuth
	I0917 02:06:15.595981    3951 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:06:15.596163    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:15.596186    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:15.596327    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.596414    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.596502    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.596587    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.596672    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.596795    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.596928    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.596935    3951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:06:15.651820    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:06:15.651831    3951 buildroot.go:70] root file system type: tmpfs
	I0917 02:06:15.651920    3951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:06:15.651934    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.652065    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.652168    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.652259    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.652347    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.652479    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.652616    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.652659    3951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:06:15.717812    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:06:15.717834    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.717968    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.718062    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.718155    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.718250    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.718387    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.718524    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.718536    3951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:06:17.394959    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:06:17.394973    3951 machine.go:96] duration metric: took 13.168443896s to provisionDockerMachine
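
The docker.service update above follows a write-then-compare idiom: the provisioner renders the unit to docker.service.new, and only when diff reports a difference (or, as here, the installed unit does not exist yet) does it swap the file into place and run daemon-reload, enable, and restart. A small Go sketch of the same idempotent-update pattern (a sketch only; minikube drives this through the SSH command shown above):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // updateIfChanged writes content to path only when it differs from what is
    // already installed, mirroring the `diff ... || { mv; systemctl ... }` idiom.
    // It reports whether a change was made, so the caller knows whether a
    // daemon-reload and service restart are needed.
    func updateIfChanged(path string, content []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, content) {
    		return false, nil // unit unchanged; skip the restart entirely
    	}
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, content, 0o644); err != nil {
    		return false, err
    	}
    	return true, os.Rename(tmp, path) // swap into place, like the sudo mv above
    }

    func main() {
    	changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
    	if err != nil {
    		panic(err)
    	}
    	// true on first run (target missing), as in the log; false when identical.
    	fmt.Println("changed:", changed)
    }
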
	I0917 02:06:17.394995    3951 start.go:293] postStartSetup for "ha-857000" (driver="hyperkit")
	I0917 02:06:17.395004    3951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:06:17.395018    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.395227    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:06:17.395243    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.395347    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.395465    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.395565    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.395656    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.438838    3951 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:06:17.443638    3951 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:06:17.443658    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:06:17.443750    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:06:17.443904    3951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:06:17.443911    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:06:17.444089    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:06:17.451612    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:06:17.483402    3951 start.go:296] duration metric: took 88.39524ms for postStartSetup
	I0917 02:06:17.483429    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.483612    3951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:06:17.483623    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.483710    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.483808    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.483897    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.483966    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.517140    3951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:06:17.517209    3951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:06:17.552751    3951 fix.go:56] duration metric: took 13.528816727s for fixHost
	I0917 02:06:17.552773    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.552913    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.553026    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.553112    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.553196    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.553326    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:17.553466    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:17.553473    3951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:06:17.609371    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726563977.697638270
	
	I0917 02:06:17.609383    3951 fix.go:216] guest clock: 1726563977.697638270
	I0917 02:06:17.609388    3951 fix.go:229] Guest: 2024-09-17 02:06:17.69763827 -0700 PDT Remote: 2024-09-17 02:06:17.552764 -0700 PDT m=+13.948274598 (delta=144.87427ms)
	I0917 02:06:17.609406    3951 fix.go:200] guest clock delta is within tolerance: 144.87427ms
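
The tolerance check is plain arithmetic on the two timestamps: the guest reports `date +%s.%N`, and the delta is its difference from the host-side clock reading, here 1726563977.697638270 s - 1726563977.552764 s = 0.14487427 s. Reproduced in Go with the values from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps from the log: guest `date +%s.%N` vs. the host clock.
    	guest := time.Unix(1726563977, 697638270)
    	host := time.Unix(1726563977, 552764000) // .552764 s, padded to nanoseconds
    	fmt.Println(guest.Sub(host))             // 144.87427ms, matching the logged delta
    }
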
	I0917 02:06:17.609410    3951 start.go:83] releasing machines lock for "ha-857000", held for 13.585495629s
	I0917 02:06:17.609431    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.609563    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:17.609665    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.609955    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.610053    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.610139    3951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:06:17.610168    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.610194    3951 ssh_runner.go:195] Run: cat /version.json
	I0917 02:06:17.610206    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.610247    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.610275    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.610357    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.610376    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.610500    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.610520    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.610600    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.610622    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.697769    3951 ssh_runner.go:195] Run: systemctl --version
	I0917 02:06:17.702709    3951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 02:06:17.706848    3951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:06:17.706892    3951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:06:17.718886    3951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:06:17.718900    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:17.719004    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:17.737294    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:06:17.746145    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:06:17.754878    3951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:06:17.754923    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:06:17.763740    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:17.772496    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:06:17.781224    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:17.790031    3951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:06:17.799078    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:06:17.808154    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:06:17.817191    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:06:17.826325    3951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:06:17.834538    3951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:06:17.842770    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:17.944652    3951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:06:17.962631    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:17.962719    3951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:06:17.974517    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:17.987421    3951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:06:18.001906    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:18.013186    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:18.024102    3951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:06:18.045444    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:18.058849    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:18.073851    3951 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:06:18.076885    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:06:18.084040    3951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:06:18.097595    3951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:06:18.193717    3951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:06:18.309886    3951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:06:18.309951    3951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:06:18.324367    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:18.418680    3951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:06:20.733359    3951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.314622452s)
	I0917 02:06:20.733433    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:06:20.744031    3951 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:06:20.756945    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:06:20.767405    3951 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:06:20.860682    3951 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:06:20.962907    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:21.070080    3951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:06:21.083874    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:06:21.094971    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:21.190975    3951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:06:21.258446    3951 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:06:21.258552    3951 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:06:21.262963    3951 start.go:563] Will wait 60s for crictl version
	I0917 02:06:21.263020    3951 ssh_runner.go:195] Run: which crictl
	I0917 02:06:21.266695    3951 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:06:21.293648    3951 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:06:21.293750    3951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:06:21.309528    3951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:06:21.349115    3951 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:06:21.349164    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:21.349574    3951 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:06:21.354153    3951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:06:21.363705    3951 kubeadm.go:883] updating cluster {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 02:06:21.363793    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:06:21.363866    3951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:06:21.378216    3951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:06:21.378227    3951 docker.go:615] Images already preloaded, skipping extraction
	I0917 02:06:21.378310    3951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:06:21.394015    3951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:06:21.394037    3951 cache_images.go:84] Images are preloaded, skipping loading
	I0917 02:06:21.394050    3951 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 02:06:21.394124    3951 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:06:21.394209    3951 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:06:21.429497    3951 cni.go:84] Creating CNI manager for ""
	I0917 02:06:21.429509    3951 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:06:21.429523    3951 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:06:21.429538    3951 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-857000 NodeName:ha-857000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:06:21.429624    3951 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-857000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
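
The kubeadm config rendered above is a single file holding four stacked YAML documents separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); on the node it lands at /var/tmp/minikube/kubeadm.yaml.new, as the scp line further down shows. A minimal Go sketch that splits and identifies the documents (a sketch assuming the gopkg.in/yaml.v3 dependency and a local copy of the file; not minikube code):

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3" // assumed dependency; any multi-document YAML decoder works
    )

    func main() {
    	// Hypothetical local copy of the kubeadm config rendered above.
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    	// Expected kinds: InitConfiguration, ClusterConfiguration,
    	// KubeletConfiguration, KubeProxyConfiguration.
    }
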
	
	I0917 02:06:21.429636    3951 kube-vip.go:115] generating kube-vip config ...
	I0917 02:06:21.429694    3951 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:06:21.442428    3951 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:06:21.442505    3951 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
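
The manifest above runs kube-vip as a static pod on each control-plane node; with cp_enable and lb_enable set it claims the virtual IP 192.169.0.254 through ARP plus a leader-election lease (plndr-cp-lock) and load-balances API-server traffic on port 8443. A throwaway Go probe of that VIP, using only the address and port from the config (illustrative; any TCP client would do):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// APIServerHAVIP and port from the kube-vip config above.
    	conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("VIP not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("VIP reachable from", conn.LocalAddr())
    }
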
	I0917 02:06:21.442559    3951 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:06:21.451375    3951 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:06:21.451431    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 02:06:21.459648    3951 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 02:06:21.473122    3951 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:06:21.487014    3951 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 02:06:21.500992    3951 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:06:21.514562    3951 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:06:21.517444    3951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:06:21.527518    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:21.625140    3951 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:06:21.639257    3951 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.5
	I0917 02:06:21.639269    3951 certs.go:194] generating shared ca certs ...
	I0917 02:06:21.639280    3951 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:21.639439    3951 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:06:21.639492    3951 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:06:21.639503    3951 certs.go:256] generating profile certs ...
	I0917 02:06:21.639592    3951 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:06:21.639611    3951 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea
	I0917 02:06:21.639646    3951 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0917 02:06:21.706715    3951 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea ...
	I0917 02:06:21.706729    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea: {Name:mk3f381e64586a5cdd027dc403cd38b58de19cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:21.707284    3951 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea ...
	I0917 02:06:21.707298    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea: {Name:mk7ad610a632f0df99198e2c9491ed57c1c9afa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:21.707543    3951 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea -> /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt
	I0917 02:06:21.707724    3951 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea -> /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key
	I0917 02:06:21.707940    3951 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:06:21.707949    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:06:21.707971    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:06:21.707989    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:06:21.708013    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:06:21.708032    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:06:21.708050    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:06:21.708068    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:06:21.708087    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
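
The apiserver cert generated above carries IP SANs for the service IPs 10.96.0.1 and 10.0.0.1, loopback, the three control-plane node IPs, and the HA virtual IP 192.169.0.254, so clients can reach the API server by any of those addresses. A self-contained Go sketch building a cert with that SAN list via the standard library (self-signed here for brevity, with DNS SANs omitted; minikube's crypto.go signs with the shared CA key instead):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// IP SANs exactly as logged above: service IPs, loopback, node IPs, VIP.
    	var ips []net.IP
    	for _, s := range []string{
    		"10.96.0.1", "127.0.0.1", "10.0.0.1",
    		"192.169.0.5", "192.169.0.6", "192.169.0.7", "192.169.0.254",
    	} {
    		ips = append(ips, net.ParseIP(s))
    	}

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    	}
    	// Self-signed for brevity; minikube signs with its CA key and cert.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
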
	I0917 02:06:21.708175    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:06:21.708221    3951 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:06:21.708229    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:06:21.708259    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:06:21.708290    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:06:21.708317    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:06:21.708378    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:06:21.708413    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:21.708433    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:06:21.708450    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:06:21.708936    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:06:21.731634    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:06:21.755014    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:06:21.780416    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:06:21.806686    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:06:21.830924    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:06:21.860517    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:06:21.883515    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:06:21.904458    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:06:21.940425    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:06:21.977748    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:06:22.031404    3951 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 02:06:22.066177    3951 ssh_runner.go:195] Run: openssl version
	I0917 02:06:22.070562    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:06:22.079011    3951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:22.082434    3951 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:22.082478    3951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:22.087894    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:06:22.096140    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:06:22.104466    3951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:06:22.107959    3951 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:06:22.107997    3951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:06:22.112345    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:06:22.120730    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:06:22.129508    3951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:06:22.133071    3951 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:06:22.133113    3951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:06:22.137371    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:06:22.145685    3951 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:06:22.149175    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:06:22.154152    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:06:22.158697    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:06:22.163258    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:06:22.167625    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:06:22.172054    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 02:06:22.176282    3951 kubeadm.go:392] StartCluster: {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:06:22.176426    3951 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:06:22.189260    3951 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:06:22.196889    3951 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 02:06:22.196900    3951 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:06:22.196943    3951 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:06:22.204529    3951 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:06:22.204834    3951 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-857000" does not appear in /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:22.204922    3951 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1025/kubeconfig needs updating (will repair): [kubeconfig missing "ha-857000" cluster setting kubeconfig missing "ha-857000" context setting]
	I0917 02:06:22.205131    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:22.205534    3951 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:22.205732    3951 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3bff720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:06:22.206065    3951 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 02:06:22.206259    3951 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:06:22.213498    3951 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 02:06:22.213509    3951 kubeadm.go:597] duration metric: took 16.605221ms to restartPrimaryControlPlane
	I0917 02:06:22.213515    3951 kubeadm.go:394] duration metric: took 37.238807ms to StartCluster
	I0917 02:06:22.213523    3951 settings.go:142] acquiring lock: {Name:mk5de70c670a23cf49fdbd19ddb5c1d2a9de9a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:22.213600    3951 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:22.213968    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:22.214179    3951 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:06:22.214192    3951 start.go:241] waiting for startup goroutines ...
	I0917 02:06:22.214206    3951 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:06:22.214324    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:22.256356    3951 out.go:177] * Enabled addons: 
	I0917 02:06:22.277195    3951 addons.go:510] duration metric: took 62.984897ms for enable addons: enabled=[]
	I0917 02:06:22.277282    3951 start.go:246] waiting for cluster config update ...
	I0917 02:06:22.277295    3951 start.go:255] writing updated cluster config ...
	I0917 02:06:22.300377    3951 out.go:201] 
	I0917 02:06:22.321646    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:22.321775    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:22.343932    3951 out.go:177] * Starting "ha-857000-m02" control-plane node in "ha-857000" cluster
	I0917 02:06:22.386310    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:06:22.386345    3951 cache.go:56] Caching tarball of preloaded images
	I0917 02:06:22.386520    3951 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:06:22.386539    3951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:06:22.386678    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:22.387665    3951 start.go:360] acquireMachinesLock for ha-857000-m02: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:06:22.387781    3951 start.go:364] duration metric: took 93.188µs to acquireMachinesLock for "ha-857000-m02"
	I0917 02:06:22.387807    3951 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:06:22.387815    3951 fix.go:54] fixHost starting: m02
	I0917 02:06:22.388245    3951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:22.388280    3951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:06:22.397656    3951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51937
	I0917 02:06:22.397993    3951 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:06:22.398338    3951 main.go:141] libmachine: Using API Version  1
	I0917 02:06:22.398355    3951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:06:22.398604    3951 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:06:22.398732    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:22.398839    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetState
	I0917 02:06:22.398926    3951 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:22.398995    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3905
	I0917 02:06:22.399925    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3905 missing from process table
	I0917 02:06:22.399987    3951 fix.go:112] recreateIfNeeded on ha-857000-m02: state=Stopped err=<nil>
	I0917 02:06:22.400002    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	W0917 02:06:22.400097    3951 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:06:22.442146    3951 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m02" ...
	I0917 02:06:22.463239    3951 main.go:141] libmachine: (ha-857000-m02) Calling .Start
	I0917 02:06:22.463548    3951 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:22.463605    3951 main.go:141] libmachine: (ha-857000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid
	I0917 02:06:22.465343    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3905 missing from process table
	I0917 02:06:22.465354    3951 main.go:141] libmachine: (ha-857000-m02) DBG | pid 3905 is in state "Stopped"
	I0917 02:06:22.465372    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid...
	I0917 02:06:22.465746    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Using UUID 1940a045-0b1b-4b28-842d-d4858a62cbd3
	I0917 02:06:22.495548    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Generated MAC 9a:95:4e:4b:65:fe
	I0917 02:06:22.495583    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:06:22.495857    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000383560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:22.495910    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000383560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:22.496018    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1940a045-0b1b-4b28-842d-d4858a62cbd3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machine
s/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:06:22.496120    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1940a045-0b1b-4b28-842d-d4858a62cbd3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:06:22.496143    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:06:22.497973    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Pid is 3976
	I0917 02:06:22.498454    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Attempt 0
	I0917 02:06:22.498484    3951 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:22.498545    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3976
	I0917 02:06:22.500282    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Searching for 9a:95:4e:4b:65:fe in /var/db/dhcpd_leases ...
	I0917 02:06:22.500349    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:06:22.500372    3951 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea9805}
	I0917 02:06:22.500382    3951 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:06:22.500397    3951 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea97be}
	I0917 02:06:22.500410    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Found match: 9a:95:4e:4b:65:fe
	I0917 02:06:22.500437    3951 main.go:141] libmachine: (ha-857000-m02) DBG | IP: 192.169.0.6
	I0917 02:06:22.500486    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetConfigRaw
	I0917 02:06:22.501123    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:06:22.501362    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:22.501877    3951 machine.go:93] provisionDockerMachine start ...
	I0917 02:06:22.501887    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:22.502006    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:22.502140    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:22.502253    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:22.502355    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:22.502453    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:22.502592    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:22.502794    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:22.502804    3951 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:06:22.506011    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:06:22.516718    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:06:22.517536    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:22.517559    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:22.517587    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:22.517605    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:22.902525    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:06:22.902540    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:06:23.017245    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:23.017263    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:23.017272    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:23.017286    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:23.018137    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:06:23.018146    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:06:28.664665    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:06:28.664731    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:06:28.664739    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:06:28.688834    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:06:33.560885    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:06:33.560902    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:06:33.561080    3951 buildroot.go:166] provisioning hostname "ha-857000-m02"
	I0917 02:06:33.561088    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:06:33.561176    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.561264    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.561361    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.561457    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.561572    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.561724    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.561884    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.561894    3951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m02 && echo "ha-857000-m02" | sudo tee /etc/hostname
	I0917 02:06:33.626435    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m02
	
	I0917 02:06:33.626450    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.626583    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.626692    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.626783    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.626875    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.627027    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.627173    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.627184    3951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:06:33.685124    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:06:33.685140    3951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:06:33.685149    3951 buildroot.go:174] setting up certificates
	I0917 02:06:33.685155    3951 provision.go:84] configureAuth start
	I0917 02:06:33.685161    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:06:33.685285    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:06:33.685391    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.685472    3951 provision.go:143] copyHostCerts
	I0917 02:06:33.685505    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:33.685552    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:06:33.685558    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:33.685701    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:06:33.686213    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:33.686248    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:06:33.686252    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:33.686328    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:06:33.686464    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:33.686504    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:06:33.686509    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:33.686577    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:06:33.686713    3951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m02 san=[127.0.0.1 192.169.0.6 ha-857000-m02 localhost minikube]
	I0917 02:06:33.724325    3951 provision.go:177] copyRemoteCerts
	I0917 02:06:33.724374    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:06:33.724388    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.724531    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.724628    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.724718    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.724808    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:33.757977    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:06:33.758053    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:06:33.777137    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:06:33.777203    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:06:33.796184    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:06:33.796248    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:06:33.815739    3951 provision.go:87] duration metric: took 130.575095ms to configureAuth
	I0917 02:06:33.815753    3951 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:06:33.815923    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:33.815937    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:33.816066    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.816180    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.816266    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.816357    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.816435    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.816546    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.816672    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.816679    3951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:06:33.868528    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:06:33.868540    3951 buildroot.go:70] root file system type: tmpfs
	I0917 02:06:33.868626    3951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:06:33.868638    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.868774    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.868862    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.868957    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.869038    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.869178    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.869313    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.869355    3951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:06:33.934180    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:06:33.934199    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.934331    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.934438    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.934537    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.934624    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.934753    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.934890    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.934902    3951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:06:35.613474    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:06:35.613490    3951 machine.go:96] duration metric: took 13.111377814s to provisionDockerMachine
	I0917 02:06:35.613498    3951 start.go:293] postStartSetup for "ha-857000-m02" (driver="hyperkit")
	I0917 02:06:35.613517    3951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:06:35.613531    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.613729    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:06:35.613743    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.613853    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.613946    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.614026    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.614114    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:35.652452    3951 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:06:35.656174    3951 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:06:35.656186    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:06:35.656273    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:06:35.656413    3951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:06:35.656420    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:06:35.656581    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:06:35.665638    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:06:35.696288    3951 start.go:296] duration metric: took 82.770634ms for postStartSetup
	I0917 02:06:35.696319    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.696511    3951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:06:35.696525    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.696625    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.696706    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.696794    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.696893    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:35.729642    3951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:06:35.729708    3951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:06:35.783199    3951 fix.go:56] duration metric: took 13.395150311s for fixHost
	I0917 02:06:35.783224    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.783375    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.783476    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.783551    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.783631    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.783768    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:35.783899    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:35.783906    3951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:06:35.838274    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726563995.926909320
	
	I0917 02:06:35.838288    3951 fix.go:216] guest clock: 1726563995.926909320
	I0917 02:06:35.838293    3951 fix.go:229] Guest: 2024-09-17 02:06:35.92690932 -0700 PDT Remote: 2024-09-17 02:06:35.783213 -0700 PDT m=+32.178408818 (delta=143.69632ms)
	I0917 02:06:35.838302    3951 fix.go:200] guest clock delta is within tolerance: 143.69632ms
	I0917 02:06:35.838306    3951 start.go:83] releasing machines lock for "ha-857000-m02", held for 13.450280733s
	I0917 02:06:35.838324    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.838459    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:06:35.861800    3951 out.go:177] * Found network options:
	I0917 02:06:35.882860    3951 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 02:06:35.903716    3951 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:06:35.903755    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.904608    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.904879    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.905023    3951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:06:35.905064    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	W0917 02:06:35.905084    3951 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:06:35.905192    3951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:06:35.905211    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.905229    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.905436    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.905470    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.905665    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.905679    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.905849    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.905865    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:35.905991    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	W0917 02:06:35.936887    3951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:06:35.936958    3951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:06:36.007933    3951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:06:36.007953    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:36.008056    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:36.024338    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:06:36.033262    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:06:36.042136    3951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:06:36.042188    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:06:36.050818    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:36.059619    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:06:36.068394    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:36.077285    3951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:06:36.086317    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:06:36.094948    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:06:36.103691    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:06:36.112538    3951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:06:36.120508    3951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:06:36.128434    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:36.230022    3951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:06:36.250428    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:36.250505    3951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:06:36.273190    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:36.285496    3951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:06:36.303235    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:36.314994    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:36.325990    3951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:06:36.351133    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:36.362290    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:36.377230    3951 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:06:36.380093    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:06:36.387911    3951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:06:36.401199    3951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:06:36.507714    3951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:06:36.609258    3951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:06:36.609285    3951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:06:36.623332    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:36.718880    3951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:07:37.748739    3951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.028781405s)
	I0917 02:07:37.748815    3951 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 02:07:37.786000    3951 out.go:201] 
	W0917 02:07:37.809190    3951 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 09:06:34 ha-857000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:06:34 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:34.324120961Z" level=info msg="Starting up"
	Sep 17 09:06:34 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:34.324775253Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 09:06:34 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:34.325518826Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=488
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.341058185Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356213648Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356261078Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356303349Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356313782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356436154Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356475371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356593098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356628148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356640458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356648167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356767218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356926440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358525862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358564683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358679405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358712925Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358797431Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358843725Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.360911977Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.360974504Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361053471Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361068314Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361078324Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361121426Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361365784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361471567Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361506271Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361517719Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361527110Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361535526Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361543621Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361552701Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361562674Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361570939Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361578985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361588503Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361603316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361612406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361620269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361628602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361638647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361646859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361654306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361662885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361671295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361681400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361690597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361698250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361705966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361720758Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361737654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361746364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361754112Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361847279Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361861726Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361869503Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361877991Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361885443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361899338Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361911740Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362480967Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362549430Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362632268Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362920029Z" level=info msg="containerd successfully booted in 0.022632s"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.344850604Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.385337180Z" level=info msg="Loading containers: start."
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.568192740Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.627785197Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.670471622Z" level=info msg="Loading containers: done."
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.677239663Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.677408183Z" level=info msg="Daemon has completed initialization"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.699597178Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 09:06:35 ha-857000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.704823863Z" level=info msg="API listen on [::]:2376"
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.821530126Z" level=info msg="Processing signal 'terminated'"
	Sep 17 09:06:36 ha-857000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.822577679Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.823011519Z" level=info msg="Daemon shutdown complete"
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.823037716Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.823053677Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 09:06:37 ha-857000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 09:06:37 ha-857000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 09:06:37 ha-857000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:06:37 ha-857000-m02 dockerd[1158]: time="2024-09-17T09:06:37.864990112Z" level=info msg="Starting up"
	Sep 17 09:07:37 ha-857000-m02 dockerd[1158]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 09:07:37 ha-857000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 09:07:37 ha-857000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 09:07:37 ha-857000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0917 02:07:37.809292    3951 out.go:270] * 
	W0917 02:07:37.810458    3951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:07:37.874286    3951 out.go:201] 
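
The journalctl excerpt above pins down the proximate failure: after minikube rewrites /etc/docker/daemon.json for the cgroupfs driver and restarts the service, the new dockerd (pid 1158) logs "Starting up" at 09:06:37 and fails exactly one minute later because it cannot dial /run/containerd/containerd.sock before its context deadline expires. In other words, containerd never came back up after the restart. A minimal, illustrative Go sketch of that deadline-bounded socket wait (not dockerd's actual code; the socket path and the 60s budget are read off the log above):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSocket retries a unix-socket dial until an overall deadline,
    // mimicking a client that backs off and retries rather than failing on
    // the first missed dial.
    func waitForSocket(path string, budget time.Duration) error {
        deadline := time.Now().Add(budget)
        for {
            conn, err := net.DialTimeout("unix", path, time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("dial %q: %w (deadline exceeded)", path, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            // Same shape as the dockerd failure above.
            fmt.Println("failed to start daemon:", err)
            return
        }
        fmt.Println("containerd socket is accepting connections")
    }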
	
	
	==> Docker <==
	Sep 17 09:06:28 ha-857000 dockerd[1182]: time="2024-09-17T09:06:28.953788653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:06:50 ha-857000 dockerd[1176]: time="2024-09-17T09:06:50.570882674Z" level=info msg="ignoring event" container=6dbb2f7111cc445377bba802440bf8e10f56f3e5a0e88f69f41d840619ffa219 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:06:50 ha-857000 dockerd[1182]: time="2024-09-17T09:06:50.571701735Z" level=info msg="shim disconnected" id=6dbb2f7111cc445377bba802440bf8e10f56f3e5a0e88f69f41d840619ffa219 namespace=moby
	Sep 17 09:06:50 ha-857000 dockerd[1182]: time="2024-09-17T09:06:50.571758895Z" level=warning msg="cleaning up after shim disconnected" id=6dbb2f7111cc445377bba802440bf8e10f56f3e5a0e88f69f41d840619ffa219 namespace=moby
	Sep 17 09:06:50 ha-857000 dockerd[1182]: time="2024-09-17T09:06:50.571767359Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:06:51 ha-857000 dockerd[1176]: time="2024-09-17T09:06:51.580125433Z" level=info msg="ignoring event" container=f1252151601a7acad5aef9c02d49be638390576e96a0ad87836a6b56f813f5c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:06:51 ha-857000 dockerd[1182]: time="2024-09-17T09:06:51.581344041Z" level=info msg="shim disconnected" id=f1252151601a7acad5aef9c02d49be638390576e96a0ad87836a6b56f813f5c1 namespace=moby
	Sep 17 09:06:51 ha-857000 dockerd[1182]: time="2024-09-17T09:06:51.581601552Z" level=warning msg="cleaning up after shim disconnected" id=f1252151601a7acad5aef9c02d49be638390576e96a0ad87836a6b56f813f5c1 namespace=moby
	Sep 17 09:06:51 ha-857000 dockerd[1182]: time="2024-09-17T09:06:51.581639267Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.085279461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.085342970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.085355817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.085528340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.087547026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.087599271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.087608710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.087706284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:07:33 ha-857000 dockerd[1176]: time="2024-09-17T09:07:33.582121952Z" level=info msg="ignoring event" container=2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:07:33 ha-857000 dockerd[1182]: time="2024-09-17T09:07:33.583058738Z" level=info msg="shim disconnected" id=2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437 namespace=moby
	Sep 17 09:07:33 ha-857000 dockerd[1182]: time="2024-09-17T09:07:33.583223961Z" level=warning msg="cleaning up after shim disconnected" id=2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437 namespace=moby
	Sep 17 09:07:33 ha-857000 dockerd[1182]: time="2024-09-17T09:07:33.583260138Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:07:34 ha-857000 dockerd[1176]: time="2024-09-17T09:07:34.599859784Z" level=info msg="ignoring event" container=5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:07:34 ha-857000 dockerd[1182]: time="2024-09-17T09:07:34.601045096Z" level=info msg="shim disconnected" id=5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761 namespace=moby
	Sep 17 09:07:34 ha-857000 dockerd[1182]: time="2024-09-17T09:07:34.601095683Z" level=warning msg="cleaning up after shim disconnected" id=5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761 namespace=moby
	Sep 17 09:07:34 ha-857000 dockerd[1182]: time="2024-09-17T09:07:34.601106271Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2d39a363ecf53       6bab7719df100                                                                                         28 seconds ago       Exited              kube-apiserver            2                   d1c62bd0a7eda       kube-apiserver-ha-857000
	5043e9bda2acc       175ffd71cce3d                                                                                         28 seconds ago       Exited              kube-controller-manager   2                   d830cb545033a       kube-controller-manager-ha-857000
	034279696db8f       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   4205e70bfa1bb       kube-vip-ha-857000
	d9fae1497b048       9aa1fad941575                                                                                         About a minute ago   Running             kube-scheduler            1                   37d9fe68f2e59       kube-scheduler-ha-857000
	f4f59b8c76404       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      1                   a23094a650513       etcd-ha-857000
	fe908ac73b00f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited              busybox                   0                   80864159ef38e       busybox-7dff88458-4jzg8
	521527f17691c       c69fa2e9cbf5f                                                                                         6 minutes ago        Exited              coredns                   0                   aa21641a5b16e       coredns-7c65d6cfc9-nl5j5
	f991c8e956d90       c69fa2e9cbf5f                                                                                         6 minutes ago        Exited              coredns                   0                   da08087b51cd9       coredns-7c65d6cfc9-fg65r
	611759af4bf7a       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   08dee0a668f3d       storage-provisioner
	5d84a01abd3e7       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              6 minutes ago        Exited              kindnet-cni               0                   38db6fab73655       kindnet-7pf7v
	0b03e5e488939       60c005f310ff3                                                                                         6 minutes ago        Exited              kube-proxy                0                   067bc1b2ad7fa       kube-proxy-vskbj
	fcb7038a6ac9e       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago        Exited              kube-vip                  0                   b74867bd31c54       kube-vip-ha-857000
	2da1b67c167c6       9aa1fad941575                                                                                         6 minutes ago        Exited              kube-scheduler            0                   f2b2b320ed41a       kube-scheduler-ha-857000
	6989933ec650e       2e96e5913fc06                                                                                         6 minutes ago        Exited              etcd                      0                   43536bf53cbec       etcd-ha-857000
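
The status table is consistent with a crash-looping control plane: kube-apiserver and kube-controller-manager are Exited at attempt 2 while etcd, kube-scheduler and kube-vip still run, and kubelet retries the exited containers with exponential backoff. A rough, illustrative sketch of that schedule (assumed kubelet defaults of roughly 10s doubling capped at 5m; these numbers are not from this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet crash-loop defaults: backoff starts near 10s,
        // doubles per failed restart, and caps at 5 minutes. This is why the
        // "Exited" entries above sit idle between attempts instead of
        // restarting immediately.
        backoff := 10 * time.Second
        const maxBackoff = 5 * time.Minute
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: next restart in %s\n", attempt, backoff)
            backoff *= 2
            if backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }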
	
	
	==> coredns [521527f17691] <==
	[INFO] 10.244.2.2:33230 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100028s
	[INFO] 10.244.2.2:37727 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077614s
	[INFO] 10.244.2.2:51233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090375s
	[INFO] 10.244.1.2:43082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115984s
	[INFO] 10.244.1.2:45048 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000071244s
	[INFO] 10.244.1.2:48877 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106601s
	[INFO] 10.244.1.2:59235 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000068348s
	[INFO] 10.244.1.2:53808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064222s
	[INFO] 10.244.1.2:54982 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064992s
	[INFO] 10.244.0.4:59177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012236s
	[INFO] 10.244.0.4:59468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096608s
	[INFO] 10.244.0.4:49953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108018s
	[INFO] 10.244.2.2:36658 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081427s
	[INFO] 10.244.1.2:53166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140458s
	[INFO] 10.244.1.2:60442 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069729s
	[INFO] 10.244.0.4:60564 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007076s
	[INFO] 10.244.0.4:57696 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000125726s
	[INFO] 10.244.2.2:33447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114855s
	[INFO] 10.244.2.2:49647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000058138s
	[INFO] 10.244.2.2:55869 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00009725s
	[INFO] 10.244.1.2:49826 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096631s
	[INFO] 10.244.1.2:33376 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000046366s
	[INFO] 10.244.1.2:47184 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f991c8e956d9] <==
	[INFO] 10.244.1.2:36169 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206963s
	[INFO] 10.244.1.2:33814 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000088589s
	[INFO] 10.244.1.2:57385 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000535008s
	[INFO] 10.244.0.4:54856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135529s
	[INFO] 10.244.0.4:47831 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.019088159s
	[INFO] 10.244.0.4:46325 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201714s
	[INFO] 10.244.0.4:45239 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000255383s
	[INFO] 10.244.0.4:55042 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000141827s
	[INFO] 10.244.2.2:47888 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108401s
	[INFO] 10.244.2.2:41486 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00044994s
	[INFO] 10.244.2.2:50623 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082841s
	[INFO] 10.244.1.2:54143 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098038s
	[INFO] 10.244.1.2:38802 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046632s
	[INFO] 10.244.0.4:39532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.002579505s
	[INFO] 10.244.2.2:53978 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077749s
	[INFO] 10.244.2.2:60710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092889s
	[INFO] 10.244.2.2:51255 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044117s
	[INFO] 10.244.1.2:36996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056219s
	[INFO] 10.244.1.2:39487 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090704s
	[INFO] 10.244.0.4:39831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131192s
	[INFO] 10.244.0.4:35770 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154922s
	[INFO] 10.244.2.2:45820 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113973s
	[INFO] 10.244.1.2:44519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120184s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0917 09:07:42.167535    2773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 09:07:42.169513    2773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 09:07:42.170918    2773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 09:07:42.172489    2773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 09:07:42.173686    2773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
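
All five identical memcache.go errors are one symptom: kubectl's API-group discovery requests to https://localhost:8443 are met with "connection refused", meaning the node is reachable but nothing listens on the apiserver port (the kube-apiserver container has exited, per the status table). Refused is worth distinguishing from a timeout when reading logs like these; a small illustrative Go probe (a hypothetical helper, not part of minikube):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
        "time"
    )

    func main() {
        // "connection refused" (ECONNREFUSED) means the host answered but no
        // process listens on the port; a timeout instead points at an
        // unreachable or filtered host. Here the former implicates the
        // exited kube-apiserver container rather than the network.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
        var ne net.Error
        switch {
        case err == nil:
            conn.Close()
            fmt.Println("apiserver port is open")
        case errors.Is(err, syscall.ECONNREFUSED):
            fmt.Println("refused: port closed, apiserver not running")
        case errors.As(err, &ne) && ne.Timeout():
            fmt.Println("timeout: host or port unreachable/filtered")
        default:
            fmt.Println("dial error:", err)
        }
    }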
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035496] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007916] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.708278] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007008] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.718295] systemd-fstab-generator[126]: Ignoring "noauto" option for root device
	[  +2.225909] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.454376] systemd-fstab-generator[462]: Ignoring "noauto" option for root device
	[  +0.098861] systemd-fstab-generator[474]: Ignoring "noauto" option for root device
	[  +1.963292] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.257545] systemd-fstab-generator[1142]: Ignoring "noauto" option for root device
	[  +0.117262] systemd-fstab-generator[1154]: Ignoring "noauto" option for root device
	[  +0.053463] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.056651] systemd-fstab-generator[1168]: Ignoring "noauto" option for root device
	[  +2.442306] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	[  +0.098086] systemd-fstab-generator[1395]: Ignoring "noauto" option for root device
	[  +0.113966] systemd-fstab-generator[1407]: Ignoring "noauto" option for root device
	[  +0.114036] systemd-fstab-generator[1422]: Ignoring "noauto" option for root device
	[  +0.434156] systemd-fstab-generator[1582]: Ignoring "noauto" option for root device
	[  +6.997669] kauditd_printk_skb: 190 callbacks suppressed
	[ +21.952863] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [6989933ec650] <==
	2024/09/17 09:05:55 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T09:05:55.944117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.248286841s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T09:05:55.944127Z","caller":"traceutil/trace.go:171","msg":"trace[182147551] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; }","duration":"5.248299338s","start":"2024-09-17T09:05:50.695825Z","end":"2024-09-17T09:05:55.944124Z","steps":["trace[182147551] 'agreement among raft nodes before linearized reading'  (duration: 5.248286916s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T09:05:55.944136Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T09:05:50.695789Z","time spent":"5.248344269s","remote":"127.0.0.1:52050","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":0,"request content":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true "}
	2024/09/17 09:05:55 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T09:05:55.984755Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T09:05:55.984786Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T09:05:55.984817Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-17T09:05:55.987724Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.987747Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988090Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988144Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988200Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988245Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988255Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988259Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988265Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988292Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988663Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988686Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988708Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988717Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.991208Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T09:05:55.991249Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T09:05:55.991256Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-857000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [f4f59b8c7640] <==
	{"level":"warn","ts":"2024-09-17T09:07:39.221977Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-857000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-09-17T09:07:39.236049Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:07:39.236062Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:07:39.248522Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-17T09:07:39.248560Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"info","ts":"2024-09-17T09:07:39.370293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:39.370405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:39.370425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:39.370451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:39.370466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:07:39.712637Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275663,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:07:40.191771Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-09-17T09:07:40.191890Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.001077849s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-09-17T09:07:40.191922Z","caller":"traceutil/trace.go:171","msg":"trace[1478188777] range","detail":"{range_begin:; range_end:; }","duration":"7.001123731s","start":"2024-09-17T09:07:33.190787Z","end":"2024-09-17T09:07:40.191911Z","steps":["trace[1478188777] 'agreement among raft nodes before linearized reading'  (duration: 7.001076442s)"],"step_count":1}
	{"level":"error","ts":"2024-09-17T09:07:40.192256Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-17T09:07:40.670293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:40.670335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:40.670344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:40.670354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:40.670360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:41.971244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:41.971287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:41.971302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:41.971320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:41.971331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
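
The restarted etcd member is stuck in a pre-vote loop at term 2: both peers (ff70cdb626651bff at 192.169.0.6, connection refused, and 4843c5334ac100b7 at 192.169.0.7, i/o timeout) are unreachable, and a 3-member cluster needs 2 votes to elect a leader, so the local member b8c6c7563d17d844 only ever receives its own MsgPreVoteResp and the cluster stays unavailable. The arithmetic, as a tiny Go sketch grounded in the member IDs above:

    package main

    import "fmt"

    // quorum is the majority size for an n-member Raft cluster.
    func quorum(members int) int { return members/2 + 1 }

    func main() {
        // Local member b8c6c7563d17d844 is the only reachable voter; peers
        // ff70cdb626651bff and 4843c5334ac100b7 refuse or time out.
        members, reachable := 3, 1
        fmt.Printf("quorum needed: %d, votes available: %d, can elect leader: %v\n",
            quorum(members), reachable, reachable >= quorum(members))
    }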
	
	
	==> kernel <==
	 09:07:42 up 1 min,  0 users,  load average: 0.24, 0.10, 0.04
	Linux ha-857000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5d84a01abd3e] <==
	I0917 09:05:22.964948       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:32.966280       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:32.966503       1 main.go:299] handling current node
	I0917 09:05:32.966605       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:32.966739       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:32.966951       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:32.967059       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:32.967333       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:32.967449       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:42.964585       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:42.964999       1 main.go:299] handling current node
	I0917 09:05:42.965252       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:42.965422       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:42.965746       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:42.965829       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:42.966204       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:42.966357       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965279       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:52.965376       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:52.965533       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:52.965592       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:52.965673       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:52.965753       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965812       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:52.965902       1 main.go:299] handling current node
	
	
	==> kube-apiserver [2d39a363ecf5] <==
	I0917 09:07:13.208670       1 options.go:228] external host was not specified, using 192.169.0.5
	I0917 09:07:13.210069       1 server.go:142] Version: v1.31.1
	I0917 09:07:13.210101       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:07:13.559096       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 09:07:13.563623       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 09:07:13.563675       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 09:07:13.563832       1 instance.go:232] Using reconciler: lease
	I0917 09:07:13.564198       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0917 09:07:33.559987       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 09:07:33.560076       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 09:07:33.564856       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0917 09:07:33.565081       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
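
This is the downstream half of the etcd failure: the apiserver's storage client to 127.0.0.1:2379 cannot complete its TLS handshake while etcd lacks quorum, every gRPC subchannel reports "context deadline exceeded", and the process exits fatally at instance.go:225, producing the Exited apiserver container in the status table. A minimal sketch of the deadline pattern at work (stand-in code, not the apiserver's):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // connectStorage stands in for a dial that never succeeds because the
    // backend (etcd here) is up but not serving.
    func connectStorage(ctx context.Context) error {
        select {
        case <-time.After(30 * time.Second): // would be the successful dial
            return nil
        case <-ctx.Done():
            return ctx.Err() // context.DeadlineExceeded once the budget is spent
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        if err := connectStorage(ctx); errors.Is(err, context.DeadlineExceeded) {
            fmt.Println("Error creating leases: error creating storage factory:", err)
        }
    }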
	
	
	==> kube-controller-manager [5043e9bda2ac] <==
	I0917 09:07:13.602574       1 serving.go:386] Generated self-signed cert in-memory
	I0917 09:07:13.965025       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 09:07:13.965059       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:07:13.966263       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 09:07:13.966394       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 09:07:13.966278       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 09:07:13.966269       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0917 09:07:34.581719       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [0b03e5e48893] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:00:59.069869       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:00:59.079118       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 09:00:59.079199       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:00:59.109184       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:00:59.109227       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:00:59.109245       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:00:59.111661       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:00:59.111847       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:00:59.111876       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:59.112952       1 config.go:199] "Starting service config controller"
	I0917 09:00:59.112979       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:00:59.112995       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:00:59.112998       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:00:59.113603       1 config.go:328] "Starting node config controller"
	I0917 09:00:59.113673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:00:59.213587       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:00:59.213649       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:00:59.213808       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2da1b67c167c] <==
	E0917 09:03:33.866320       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 887505bd-cf68-4e77-be17-99550df4b4b4(default/busybox-7dff88458-4jzg8) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-4jzg8"
	E0917 09:03:33.866475       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4jzg8\": pod busybox-7dff88458-4jzg8 is already assigned to node \"ha-857000\"" pod="default/busybox-7dff88458-4jzg8"
	I0917 09:03:33.866490       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-4jzg8" node="ha-857000"
	E0917 09:03:33.876570       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5x9l8\": pod busybox-7dff88458-5x9l8 is already assigned to node \"ha-857000-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-5x9l8" node="ha-857000-m03"
	E0917 09:03:33.876627       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod dfc21081-4b44-4f15-9713-8dbd1797a985(default/busybox-7dff88458-5x9l8) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-5x9l8"
	E0917 09:03:33.876641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5x9l8\": pod busybox-7dff88458-5x9l8 is already assigned to node \"ha-857000-m03\"" pod="default/busybox-7dff88458-5x9l8"
	I0917 09:03:33.876653       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5x9l8" node="ha-857000-m03"
	E0917 09:04:05.799466       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zchkt\": pod kube-proxy-zchkt is already assigned to node \"ha-857000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zchkt" node="ha-857000-m04"
	E0917 09:04:05.799587       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c03ed58b-9571-4d9e-bb6b-c12332f7766a(kube-system/kube-proxy-zchkt) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zchkt"
	E0917 09:04:05.799651       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zchkt\": pod kube-proxy-zchkt is already assigned to node \"ha-857000-m04\"" pod="kube-system/kube-proxy-zchkt"
	I0917 09:04:05.799843       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zchkt" node="ha-857000-m04"
	E0917 09:04:05.810597       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4jk9v\": pod kindnet-4jk9v is already assigned to node \"ha-857000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4jk9v" node="ha-857000-m04"
	E0917 09:04:05.810752       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 24a018c6-9cbb-4d17-a295-8fef456534a0(kube-system/kindnet-4jk9v) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4jk9v"
	E0917 09:04:05.811044       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4jk9v\": pod kindnet-4jk9v is already assigned to node \"ha-857000-m04\"" pod="kube-system/kindnet-4jk9v"
	I0917 09:04:05.811236       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4jk9v" node="ha-857000-m04"
	E0917 09:04:05.816361       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-q5f2s\": pod kindnet-q5f2s is already assigned to node \"ha-857000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-q5f2s" node="ha-857000-m04"
	E0917 09:04:05.816486       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-q5f2s\": pod kindnet-q5f2s is already assigned to node \"ha-857000-m04\"" pod="kube-system/kindnet-q5f2s"
	E0917 09:04:05.829276       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tbbh2\": pod kindnet-tbbh2 is already assigned to node \"ha-857000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tbbh2" node="ha-857000-m04"
	E0917 09:04:05.829463       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a3360b43-cfb5-45f5-9de3-cb8bfd82ac14(kube-system/kindnet-tbbh2) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tbbh2"
	E0917 09:04:05.829578       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tbbh2\": pod kindnet-tbbh2 is already assigned to node \"ha-857000-m04\"" pod="kube-system/kindnet-tbbh2"
	I0917 09:04:05.829611       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tbbh2" node="ha-857000-m04"
	I0917 09:05:55.853932       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 09:05:55.858618       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0917 09:05:55.858815       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 09:05:55.881585       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d9fae1497b04] <==
	E0917 09:07:31.901651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 09:07:32.071397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 09:07:32.071649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 09:07:34.580116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33886->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.580206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33886->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.580455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42496->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.580535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42496->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.580863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33904->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.580996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33904->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.581303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33898->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.581360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33898->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.581744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33862->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.582050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33862->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.582256       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33854->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.582356       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33854->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.582989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42470->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.583036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42470->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.583221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42488->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.583273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42488->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.583499       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33884->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.583538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33884->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.583720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42464->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.583760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42464->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.583992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42512->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.584033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42512->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 17 09:07:33 ha-857000 kubelet[1589]: I0917 09:07:33.850461    1589 scope.go:117] "RemoveContainer" containerID="6dbb2f7111cc445377bba802440bf8e10f56f3e5a0e88f69f41d840619ffa219"
	Sep 17 09:07:33 ha-857000 kubelet[1589]: I0917 09:07:33.851330    1589 scope.go:117] "RemoveContainer" containerID="2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437"
	Sep 17 09:07:33 ha-857000 kubelet[1589]: E0917 09:07:33.851432    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-857000_kube-system(f9e4594843635b7ed6662a3474d619e9)\"" pod="kube-system/kube-apiserver-ha-857000" podUID="f9e4594843635b7ed6662a3474d619e9"
	Sep 17 09:07:34 ha-857000 kubelet[1589]: I0917 09:07:34.867010    1589 scope.go:117] "RemoveContainer" containerID="f1252151601a7acad5aef9c02d49be638390576e96a0ad87836a6b56f813f5c1"
	Sep 17 09:07:34 ha-857000 kubelet[1589]: I0917 09:07:34.868149    1589 scope.go:117] "RemoveContainer" containerID="5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761"
	Sep 17 09:07:34 ha-857000 kubelet[1589]: E0917 09:07:34.868227    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857000_kube-system(2e359dfc5a9c04b45f1f4ad5b0c126ca)\"" pod="kube-system/kube-controller-manager-ha-857000" podUID="2e359dfc5a9c04b45f1f4ad5b0c126ca"
	Sep 17 09:07:35 ha-857000 kubelet[1589]: E0917 09:07:35.692107    1589 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-857000.17f5fccb3d09c90c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-857000,UID:ha-857000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-857000,},FirstTimestamp:2024-09-17 09:06:21.999065356 +0000 UTC m=+0.224912397,LastTimestamp:2024-09-17 09:06:21.999065356 +0000 UTC m=+0.224912397,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-857000,}"
	Sep 17 09:07:36 ha-857000 kubelet[1589]: I0917 09:07:36.557057    1589 kubelet_node_status.go:72] "Attempting to register node" node="ha-857000"
	Sep 17 09:07:37 ha-857000 kubelet[1589]: I0917 09:07:37.880843    1589 scope.go:117] "RemoveContainer" containerID="5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761"
	Sep 17 09:07:37 ha-857000 kubelet[1589]: E0917 09:07:37.881410    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857000_kube-system(2e359dfc5a9c04b45f1f4ad5b0c126ca)\"" pod="kube-system/kube-controller-manager-ha-857000" podUID="2e359dfc5a9c04b45f1f4ad5b0c126ca"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: E0917 09:07:38.763996    1589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-857000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: W0917 09:07:38.764000    1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 09:07:38 ha-857000 kubelet[1589]: E0917 09:07:38.764047    1589 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-857000"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: E0917 09:07:38.764044    1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: I0917 09:07:38.848033    1589 scope.go:117] "RemoveContainer" containerID="2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: E0917 09:07:38.848255    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-857000_kube-system(f9e4594843635b7ed6662a3474d619e9)\"" pod="kube-system/kube-apiserver-ha-857000" podUID="f9e4594843635b7ed6662a3474d619e9"
	Sep 17 09:07:40 ha-857000 kubelet[1589]: I0917 09:07:40.089264    1589 scope.go:117] "RemoveContainer" containerID="5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761"
	Sep 17 09:07:40 ha-857000 kubelet[1589]: E0917 09:07:40.089464    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857000_kube-system(2e359dfc5a9c04b45f1f4ad5b0c126ca)\"" pod="kube-system/kube-controller-manager-ha-857000" podUID="2e359dfc5a9c04b45f1f4ad5b0c126ca"
	Sep 17 09:07:40 ha-857000 kubelet[1589]: I0917 09:07:40.554783    1589 scope.go:117] "RemoveContainer" containerID="2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437"
	Sep 17 09:07:40 ha-857000 kubelet[1589]: E0917 09:07:40.554930    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-857000_kube-system(f9e4594843635b7ed6662a3474d619e9)\"" pod="kube-system/kube-apiserver-ha-857000" podUID="f9e4594843635b7ed6662a3474d619e9"
	Sep 17 09:07:41 ha-857000 kubelet[1589]: W0917 09:07:41.836670    1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-857000&limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 09:07:41 ha-857000 kubelet[1589]: W0917 09:07:41.836670    1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 09:07:41 ha-857000 kubelet[1589]: E0917 09:07:41.836720    1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-857000&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 09:07:41 ha-857000 kubelet[1589]: E0917 09:07:41.836737    1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 09:07:42 ha-857000 kubelet[1589]: E0917 09:07:42.091554    1589 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-857000\" not found"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-857000 -n ha-857000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-857000 -n ha-857000: exit status 2 (149.648767ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-857000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (2.85s)
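
Note: the status probes in this report drive `minikube status --format=...`, which renders a Go text/template against the machine's status (here `{{.APIServer}}` printed "Stopped" and, in the next test's post-mortem, `{{.Host}}` prints "Running"). A minimal standalone sketch of that mechanism, assuming a hypothetical Status struct that models only the two fields these templates touch:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a hypothetical stand-in for the structure minikube renders;
	// only the fields exercised by this report's --format templates are modeled.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Stopped"}
		// --format={{.APIServer}} corresponds to a template like this one.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints "Stopped", as in the output above
	}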

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-857000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-857000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-857000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-857000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
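
The assertion above reduces to decoding the `profile list --output json` payload and comparing the Status field of the ha-857000 profile. A minimal sketch of that check as a standalone program, not the test's actual helper; the struct models only the two JSON keys the assertion reads:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// profileList models just the fields of the JSON dump inspected here
	// ("valid" -> Name, Status); everything else is ignored on decode.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "--output", "json").Output()
		if err != nil {
			log.Fatalf("profile list: %v", err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatalf("decode: %v", err)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-857000" && p.Status != "Degraded" {
				fmt.Printf("expected %s to have Degraded status, got %s\n", p.Name, p.Status)
			}
		}
	}
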
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-857000 -n ha-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-857000 -n ha-857000: exit status 2 (147.1555ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-857000 logs -n 25: (2.129043264s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m02 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m03_ha-857000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m03:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04:/home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m04 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp testdata/cp-test.txt                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3490570692/001/cp-test_ha-857000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000:/home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000 sudo cat                                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m02:/home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m02 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03:/home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m03 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-857000 node stop m02 -v=7                                                                                                 | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-857000 node start m02 -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:05 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000 -v=7                                                                                                       | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-857000 -v=7                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT | 17 Sep 24 02:06 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-857000 --wait=true -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:06 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	| node    | ha-857000 node delete m03 -v=7                                                                                               | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 02:06:03
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
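
Every entry below follows the klog header layout stated above ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). As an illustrative aside, a few lines of Go suffice to split such a header into its fields; the regular expression is derived only from that format string, not from any minikube code:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Captures: severity, mmdd, time, threadid, file:line, message.
	var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ :]+:\d+)\] (.*)$`)

	func main() {
		line := `I0917 02:06:03.640574    3951 out.go:345] Setting OutFile to fd 1 ...`
		if m := klogHeader.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s mmdd=%s time=%s threadid=%s location=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
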
	I0917 02:06:03.640574    3951 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:06:03.641305    3951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:06:03.641314    3951 out.go:358] Setting ErrFile to fd 2...
	I0917 02:06:03.641320    3951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:06:03.641922    3951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:06:03.643438    3951 out.go:352] Setting JSON to false
	I0917 02:06:03.667323    3951 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2133,"bootTime":1726561830,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:06:03.667643    3951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:06:03.689297    3951 out.go:177] * [ha-857000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:06:03.731193    3951 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:06:03.731279    3951 notify.go:220] Checking for updates...
	I0917 02:06:03.773863    3951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:03.794994    3951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:06:03.815992    3951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:06:03.837103    3951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:06:03.858226    3951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:06:03.879788    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:03.879962    3951 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:06:03.880706    3951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:03.880768    3951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:06:03.890269    3951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51913
	I0917 02:06:03.890631    3951 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:06:03.891014    3951 main.go:141] libmachine: Using API Version  1
	I0917 02:06:03.891039    3951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:06:03.891290    3951 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:06:03.891417    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:03.920139    3951 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 02:06:03.941013    3951 start.go:297] selected driver: hyperkit
	I0917 02:06:03.941066    3951 start.go:901] validating driver "hyperkit" against &{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:06:03.941369    3951 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:06:03.941551    3951 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:06:03.941770    3951 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:06:03.951375    3951 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:06:03.956115    3951 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:03.956133    3951 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:06:03.959464    3951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:06:03.959502    3951 cni.go:84] Creating CNI manager for ""
	I0917 02:06:03.959545    3951 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:06:03.959620    3951 start.go:340] cluster config:
	{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:06:03.959742    3951 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:06:04.002033    3951 out.go:177] * Starting "ha-857000" primary control-plane node in "ha-857000" cluster
	I0917 02:06:04.022857    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:06:04.022895    3951 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:06:04.022909    3951 cache.go:56] Caching tarball of preloaded images
	I0917 02:06:04.023022    3951 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:06:04.023030    3951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:06:04.023135    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:04.023618    3951 start.go:360] acquireMachinesLock for ha-857000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:06:04.023673    3951 start.go:364] duration metric: took 42.184µs to acquireMachinesLock for "ha-857000"
	I0917 02:06:04.023691    3951 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:06:04.023701    3951 fix.go:54] fixHost starting: 
	I0917 02:06:04.023937    3951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:04.023964    3951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:06:04.032560    3951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51915
	I0917 02:06:04.032902    3951 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:06:04.033222    3951 main.go:141] libmachine: Using API Version  1
	I0917 02:06:04.033234    3951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:06:04.033482    3951 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:06:04.033595    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:04.033680    3951 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:06:04.033773    3951 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:04.033830    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3402
	I0917 02:06:04.034740    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3402 missing from process table
	I0917 02:06:04.034780    3951 fix.go:112] recreateIfNeeded on ha-857000: state=Stopped err=<nil>
	I0917 02:06:04.034806    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	W0917 02:06:04.034888    3951 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:06:04.077159    3951 out.go:177] * Restarting existing hyperkit VM for "ha-857000" ...
	I0917 02:06:04.097853    3951 main.go:141] libmachine: (ha-857000) Calling .Start
	I0917 02:06:04.098040    3951 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:04.098062    3951 main.go:141] libmachine: (ha-857000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid
	I0917 02:06:04.099681    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3402 missing from process table
	I0917 02:06:04.099693    3951 main.go:141] libmachine: (ha-857000) DBG | pid 3402 is in state "Stopped"
	I0917 02:06:04.099713    3951 main.go:141] libmachine: (ha-857000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid...
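	The "pid 3402 missing from process table" / stale-pid-file sequence above is a liveness probe on the hyperkit pidfile. A sketch of that probe, assuming the pidfile holds a bare decimal pid; kill(pid, 0) tests for existence without delivering a signal:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// alive reports whether the pid recorded in pidfile maps to a live process.
// Note: kill(pid, 0) returning EPERM would mean the pid exists but belongs to
// another user; this sketch treats any error as "gone".
func alive(pidfile string) (bool, error) {
	b, err := os.ReadFile(pidfile)
	if err != nil {
		return false, err
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
	if err != nil {
		return false, err
	}
	return syscall.Kill(pid, 0) == nil, nil
}

func main() {
	pidfile := "/tmp/hyperkit.pid" // hypothetical path; the log uses the machine dir
	if ok, err := alive(pidfile); err == nil && !ok {
		fmt.Println("Removing stale pid file", pidfile)
		os.Remove(pidfile)
	}
}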
	I0917 02:06:04.100071    3951 main.go:141] libmachine: (ha-857000) DBG | Using UUID f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c
	I0917 02:06:04.220854    3951 main.go:141] libmachine: (ha-857000) DBG | Generated MAC c2:63:2b:63:80:76
	I0917 02:06:04.220886    3951 main.go:141] libmachine: (ha-857000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:06:04.221000    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6870)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:04.221030    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6870)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:04.221075    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:06:04.221122    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:06:04.221130    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:06:04.222561    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 DEBUG: hyperkit: Pid is 3964
	I0917 02:06:04.222927    3951 main.go:141] libmachine: (ha-857000) DBG | Attempt 0
	I0917 02:06:04.222940    3951 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:04.222982    3951 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3964
	I0917 02:06:04.224835    3951 main.go:141] libmachine: (ha-857000) DBG | Searching for c2:63:2b:63:80:76 in /var/db/dhcpd_leases ...
	I0917 02:06:04.224889    3951 main.go:141] libmachine: (ha-857000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:06:04.224918    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:06:04.224931    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea97be}
	I0917 02:06:04.224951    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9724}
	I0917 02:06:04.224959    3951 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea96ad}
	I0917 02:06:04.224964    3951 main.go:141] libmachine: (ha-857000) DBG | Found match: c2:63:2b:63:80:76
	I0917 02:06:04.224968    3951 main.go:141] libmachine: (ha-857000) DBG | IP: 192.169.0.5
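	The MAC-to-IP resolution above scans /var/db/dhcpd_leases, the lease database macOS's bootpd maintains for the vmnet subnet. A rough parser for its brace-delimited records; the exact key layout (name=, ip_address=, hw_address=) is assumed from the dhcp entries printed in the log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans the bootpd lease file for a hw_address matching mac and
// returns the ip_address of that record. Each record is assumed to look like
// { name=minikube ip_address=192.169.0.5 hw_address=1,c2:63:2b:63:80:76 ... }
// with one key=value per line between the braces.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	matched := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip, matched = "", false // new record
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,c2:63:2b:63:80:76 -- drop the leading type byte.
			if _, addr, ok := strings.Cut(line, ","); ok && addr == mac {
				matched = true
			}
		case line == "}":
			if matched && ip != "" {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "c2:63:2b:63:80:76")
	if err != nil {
		panic(err)
	}
	fmt.Println("IP:", ip) // the log resolves this MAC to 192.169.0.5
}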
	I0917 02:06:04.225012    3951 main.go:141] libmachine: (ha-857000) Calling .GetConfigRaw
	I0917 02:06:04.225649    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:04.225875    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:04.226292    3951 machine.go:93] provisionDockerMachine start ...
	I0917 02:06:04.226303    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:04.226417    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:04.226547    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:04.226663    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:04.226797    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:04.226907    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:04.227062    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:04.227266    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:04.227274    3951 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:06:04.230562    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:06:04.281228    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:06:04.281906    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:04.281925    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:04.281932    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:04.281939    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:04.662879    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:06:04.662893    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:06:04.777528    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:04.777548    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:04.777560    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:04.777595    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:04.778494    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:06:04.778504    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:06:10.382594    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:06:10.382613    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:06:10.382641    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:06:10.407226    3951 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:06:10 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:06:15.292530    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:06:15.292580    3951 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:06:15.292726    3951 buildroot.go:166] provisioning hostname "ha-857000"
	I0917 02:06:15.292736    3951 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:06:15.292849    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.293003    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.293094    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.293188    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.293326    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.293545    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.293705    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.293713    3951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000 && echo "ha-857000" | sudo tee /etc/hostname
	I0917 02:06:15.366591    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000
	
	I0917 02:06:15.366612    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.366751    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.366847    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.366940    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.367034    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.367186    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.367320    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.367331    3951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:06:15.430651    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
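	Every "About to run SSH command" step above goes through one SSH session against 192.169.0.5:22 authenticated with the machine's id_rsa key. A stripped-down equivalent using golang.org/x/crypto/ssh (host-key verification is disabled here only because the target is a throwaway local VM; minikube's own ssh_runner carries more plumbing):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM sketch
	}
	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same command the provisioner runs in the log above.
	out, err := sess.CombinedOutput(`sudo hostname ha-857000 && echo "ha-857000" | sudo tee /etc/hostname`)
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}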
	I0917 02:06:15.430671    3951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:06:15.430688    3951 buildroot.go:174] setting up certificates
	I0917 02:06:15.430697    3951 provision.go:84] configureAuth start
	I0917 02:06:15.430705    3951 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:06:15.430833    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:15.430948    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.431043    3951 provision.go:143] copyHostCerts
	I0917 02:06:15.431073    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:15.431127    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:06:15.431135    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:15.431279    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:06:15.431473    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:15.431502    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:06:15.431506    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:15.431572    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:06:15.431702    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:15.431739    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:06:15.431744    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:15.431808    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:06:15.431954    3951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000 san=[127.0.0.1 192.169.0.5 ha-857000 localhost minikube]
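	The "generating server cert" line is plain crypto/x509 work: build a certificate template carrying the SANs listed above and sign it with the CA key. A self-contained sketch; the CA is generated inline here, whereas minikube loads it from ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA; the real flow parses an existing ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server cert carrying the SANs from the provision line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-857000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}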
	I0917 02:06:15.502156    3951 provision.go:177] copyRemoteCerts
	I0917 02:06:15.502214    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:06:15.502227    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.502353    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.502455    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.502537    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.502627    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:15.536073    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:06:15.536152    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:06:15.555893    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:06:15.555952    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 02:06:15.576096    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:06:15.576155    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:06:15.595956    3951 provision.go:87] duration metric: took 165.243542ms to configureAuth
	I0917 02:06:15.595981    3951 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:06:15.596163    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:15.596186    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:15.596327    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.596414    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.596502    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.596587    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.596672    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.596795    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.596928    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.596935    3951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:06:15.651820    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:06:15.651831    3951 buildroot.go:70] root file system type: tmpfs
	I0917 02:06:15.651920    3951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:06:15.651934    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.652065    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.652168    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.652259    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.652347    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.652479    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.652616    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.652659    3951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:06:15.717812    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:06:15.717834    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:15.717968    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:15.718062    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.718155    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:15.718250    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:15.718387    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:15.718524    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:15.718536    3951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:06:17.394959    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:06:17.394973    3951 machine.go:96] duration metric: took 13.168443896s to provisionDockerMachine
	I0917 02:06:17.394995    3951 start.go:293] postStartSetup for "ha-857000" (driver="hyperkit")
	I0917 02:06:17.395004    3951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:06:17.395018    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.395227    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:06:17.395243    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.395347    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.395465    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.395565    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.395656    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.438838    3951 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:06:17.443638    3951 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:06:17.443658    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:06:17.443750    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:06:17.443904    3951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:06:17.443911    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:06:17.444089    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:06:17.451612    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:06:17.483402    3951 start.go:296] duration metric: took 88.39524ms for postStartSetup
	I0917 02:06:17.483429    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.483612    3951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:06:17.483623    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.483710    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.483808    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.483897    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.483966    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.517140    3951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:06:17.517209    3951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:06:17.552751    3951 fix.go:56] duration metric: took 13.528816727s for fixHost
	I0917 02:06:17.552773    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.552913    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.553026    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.553112    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.553196    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.553326    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:17.553466    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:06:17.553473    3951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:06:17.609371    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726563977.697638270
	
	I0917 02:06:17.609383    3951 fix.go:216] guest clock: 1726563977.697638270
	I0917 02:06:17.609388    3951 fix.go:229] Guest: 2024-09-17 02:06:17.69763827 -0700 PDT Remote: 2024-09-17 02:06:17.552764 -0700 PDT m=+13.948274598 (delta=144.87427ms)
	I0917 02:06:17.609406    3951 fix.go:200] guest clock delta is within tolerance: 144.87427ms
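	The guest-clock check captures `date +%s.%N` inside the VM and compares it with the host's wall clock; a small delta means no time resync is needed. A sketch of the comparison using the two timestamps from the log (the 2s tolerance is an assumed threshold, not taken from this report):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// `date +%s.%N` output captured in the guest (value from the log).
	guestOut := "1726563977.697638270"
	secStr, nsecStr, _ := strings.Cut(guestOut, ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		panic(err)
	}
	nsec, err := strconv.ParseInt(nsecStr, 10, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(sec, nsec)

	// Host wall clock read at the same moment (also from the log).
	host := time.Date(2024, 9, 17, 2, 6, 17, 552764000, time.FixedZone("PDT", -7*3600))

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold; the report only shows the delta
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta) // ~144.87427ms
	} else {
		fmt.Printf("guest clock off by %s, would resync\n", delta)
	}
}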
	I0917 02:06:17.609410    3951 start.go:83] releasing machines lock for "ha-857000", held for 13.585495629s
	I0917 02:06:17.609431    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.609563    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:17.609665    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.609955    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.610053    3951 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:06:17.610139    3951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:06:17.610168    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.610194    3951 ssh_runner.go:195] Run: cat /version.json
	I0917 02:06:17.610206    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:06:17.610247    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.610275    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:06:17.610357    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.610376    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:06:17.610500    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.610520    3951 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:06:17.610600    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.610622    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:06:17.697769    3951 ssh_runner.go:195] Run: systemctl --version
	I0917 02:06:17.702709    3951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 02:06:17.706848    3951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:06:17.706892    3951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:06:17.718886    3951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:06:17.718900    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:17.719004    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:17.737294    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:06:17.746145    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:06:17.754878    3951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:06:17.754923    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:06:17.763740    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:17.772496    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:06:17.781224    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:17.790031    3951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:06:17.799078    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:06:17.808154    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:06:17.817191    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:06:17.826325    3951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:06:17.834538    3951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:06:17.842770    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:17.944652    3951 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:06:17.962631    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:17.962719    3951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:06:17.974517    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:17.987421    3951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:06:18.001906    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:18.013186    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:18.024102    3951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:06:18.045444    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:18.058849    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:18.073851    3951 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:06:18.076885    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:06:18.084040    3951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:06:18.097595    3951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:06:18.193717    3951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:06:18.309886    3951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:06:18.309951    3951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:06:18.324367    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:18.418680    3951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:06:20.733359    3951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.314622452s)
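	The restart above picks up the 130-byte /etc/docker/daemon.json copied in a moment earlier, which is what switches dockerd to the cgroupfs driver. The file's exact contents are not shown in this report; a plausible reconstruction, built the way a Go caller might marshal it (every value below is an assumption; the key names follow dockerd's documented daemon.json schema):

package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig mirrors the handful of daemon.json keys relevant to the
// cgroup-driver step.
type daemonConfig struct {
	ExecOpts  []string          `json:"exec-opts"`
	LogDriver string            `json:"log-driver,omitempty"`
	LogOpts   map[string]string `json:"log-opts,omitempty"`
}

func main() {
	cfg := daemonConfig{
		ExecOpts:  []string{"native.cgroupdriver=cgroupfs"},
		LogDriver: "json-file",
		LogOpts:   map[string]string{"max-size": "100m"},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // would be written to /etc/docker/daemon.json
}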
	I0917 02:06:20.733433    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:06:20.744031    3951 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:06:20.756945    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:06:20.767405    3951 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:06:20.860682    3951 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:06:20.962907    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:21.070080    3951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:06:21.083874    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:06:21.094971    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:21.190975    3951 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:06:21.258446    3951 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:06:21.258552    3951 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:06:21.262963    3951 start.go:563] Will wait 60s for crictl version
	I0917 02:06:21.263020    3951 ssh_runner.go:195] Run: which crictl
	I0917 02:06:21.266695    3951 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:06:21.293648    3951 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:06:21.293750    3951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:06:21.309528    3951 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:06:21.349115    3951 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:06:21.349164    3951 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:06:21.349574    3951 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:06:21.354153    3951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:06:21.363705    3951 kubeadm.go:883] updating cluster {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 02:06:21.363793    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:06:21.363866    3951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:06:21.378216    3951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:06:21.378227    3951 docker.go:615] Images already preloaded, skipping extraction
	I0917 02:06:21.378310    3951 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:06:21.394015    3951 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:06:21.394037    3951 cache_images.go:84] Images are preloaded, skipping loading
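	"Images are preloaded, skipping loading" is a set comparison between the `docker images` listing and the image list required for v1.31.1. A sketch of that check, with the required list copied from the stdout block above (trimmed to the core images):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}

	// Same listing the log runs inside the VM (executed locally here).
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}

	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	if len(missing) == 0 {
		fmt.Println("Images are preloaded, skipping loading")
	} else {
		fmt.Println("would load:", missing)
	}
}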
	I0917 02:06:21.394050    3951 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 02:06:21.394124    3951 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:06:21.394209    3951 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:06:21.429497    3951 cni.go:84] Creating CNI manager for ""
	I0917 02:06:21.429509    3951 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:06:21.429523    3951 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:06:21.429538    3951 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-857000 NodeName:ha-857000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:06:21.429624    3951 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-857000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
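	The kubeadm config printed above is rendered by filling a Go text/template with the options struct from kubeadm.go:181. A toy version rendering just the version/networking stanza; the struct fields and template text here are illustrative, not minikube's actual ones:

package main

import (
	"os"
	"text/template"
)

// opts carries only the values the snippet below needs; the real options
// struct in the log has many more fields.
type opts struct {
	KubernetesVersion string
	DNSDomain         string
	PodSubnet         string
	ServiceCIDR       string
}

const networkingTmpl = `kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("networking").Parse(networkingTmpl))
	err := t.Execute(os.Stdout, opts{
		KubernetesVersion: "v1.31.1",
		DNSDomain:         "cluster.local",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}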
	
	I0917 02:06:21.429636    3951 kube-vip.go:115] generating kube-vip config ...
	I0917 02:06:21.429694    3951 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:06:21.442428    3951 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
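	The lb_enable/lb_port pair in the manifest that follows appears to be emitted because the modprobe probe above succeeded: kube-vip's control-plane load balancing relies on the kernel IPVS modules. A sketch of that gate, with the module list copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// If the IPVS modules load, enable kube-vip's control-plane load
	// balancer; otherwise fall back to VIP-only mode.
	probe := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
	if probe.Run() == nil {
		fmt.Println("auto-enabling control-plane load-balancing in kube-vip")
		// lbEnable would toggle the lb_enable/lb_port env vars in the manifest.
	} else {
		fmt.Println("IPVS modules unavailable, leaving lb_enable unset")
	}
}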
	I0917 02:06:21.442505    3951 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 02:06:21.442559    3951 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:06:21.451375    3951 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:06:21.451431    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 02:06:21.459648    3951 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 02:06:21.473122    3951 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:06:21.487014    3951 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 02:06:21.500992    3951 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:06:21.514562    3951 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:06:21.517444    3951 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:06:21.527518    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:21.625140    3951 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:06:21.639257    3951 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.5
	I0917 02:06:21.639269    3951 certs.go:194] generating shared ca certs ...
	I0917 02:06:21.639280    3951 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:21.639439    3951 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:06:21.639492    3951 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:06:21.639503    3951 certs.go:256] generating profile certs ...
	I0917 02:06:21.639592    3951 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:06:21.639611    3951 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea
	I0917 02:06:21.639646    3951 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0917 02:06:21.706715    3951 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea ...
	I0917 02:06:21.706729    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea: {Name:mk3f381e64586a5cdd027dc403cd38b58de19cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:21.707284    3951 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea ...
	I0917 02:06:21.707298    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea: {Name:mk7ad610a632f0df99198e2c9491ed57c1c9afa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:21.707543    3951 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt.34d356ea -> /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt
	I0917 02:06:21.707724    3951 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea -> /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key
	I0917 02:06:21.707940    3951 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
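
	The apiserver profile cert generated above carries every address a client might dial as a SAN: the in-cluster service IP 10.96.0.1, localhost, the three control-plane node IPs, and the HA VIP 192.169.0.254. A self-contained Go sketch of issuing such a SAN-bearing cert with crypto/x509; a throwaway self-signed CA stands in for minikube's ca.crt/ca.key, and this is not the actual certs.go code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in self-signed CA; minikube loads its existing CA instead.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}

		// Leaf cert: the SAN IP list is what lets clients reach the
		// apiserver by service IP, localhost, any node IP, or the VIP.
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"),
				net.ParseIP("192.169.0.7"), net.ParseIP("192.169.0.254"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
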
	I0917 02:06:21.707949    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:06:21.707971    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:06:21.707989    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:06:21.708013    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:06:21.708032    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:06:21.708050    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:06:21.708068    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:06:21.708087    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:06:21.708175    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:06:21.708221    3951 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:06:21.708229    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:06:21.708259    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:06:21.708290    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:06:21.708317    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:06:21.708378    3951 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:06:21.708413    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:21.708433    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:06:21.708450    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:06:21.708936    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:06:21.731634    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:06:21.755014    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:06:21.780416    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:06:21.806686    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:06:21.830924    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:06:21.860517    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:06:21.883515    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:06:21.904458    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:06:21.940425    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:06:21.977748    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:06:22.031404    3951 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 02:06:22.066177    3951 ssh_runner.go:195] Run: openssl version
	I0917 02:06:22.070562    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:06:22.079011    3951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:22.082434    3951 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:22.082478    3951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:06:22.087894    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:06:22.096140    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:06:22.104466    3951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:06:22.107959    3951 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:06:22.107997    3951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:06:22.112345    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:06:22.120730    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:06:22.129508    3951 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:06:22.133071    3951 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:06:22.133113    3951 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:06:22.137371    3951 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:06:22.145685    3951 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:06:22.149175    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:06:22.154152    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:06:22.158697    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:06:22.163258    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:06:22.167625    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:06:22.172054    3951 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
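
	Each `openssl x509 -checkend 86400` run above asks one question per cert: will it still be valid 24 hours from now? The equivalent check in Go for one of the certs from the log, as a sketch rather than minikube's implementation:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Same test as `openssl x509 -checkend 86400`: fail if the cert
		// will already be expired 24 hours from now.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 86400s; regeneration needed")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 86400s")
	}
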
	I0917 02:06:22.176282    3951 kubeadm.go:392] StartCluster: {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:06:22.176426    3951 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:06:22.189260    3951 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:06:22.196889    3951 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 02:06:22.196900    3951 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:06:22.196943    3951 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:06:22.204529    3951 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:06:22.204834    3951 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-857000" does not appear in /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:22.204922    3951 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1025/kubeconfig needs updating (will repair): [kubeconfig missing "ha-857000" cluster setting kubeconfig missing "ha-857000" context setting]
	I0917 02:06:22.205131    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:22.205534    3951 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:22.205732    3951 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3bff720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:06:22.206065    3951 cert_rotation.go:140] Starting client certificate rotation controller
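
	kubeconfig.go:62 above found the kubeconfig missing both the "ha-857000" cluster and context entries and repaired it in place. A sketch of that repair using k8s.io/client-go's clientcmd package, with the server URL and cert paths taken from the kapi.go line above; the helper is hypothetical and minikube's own code path differs:

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/19648-1025/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			cfg = clientcmdapi.NewConfig() // start fresh if unreadable
		}
		if _, ok := cfg.Clusters["ha-857000"]; !ok {
			cfg.Clusters["ha-857000"] = &clientcmdapi.Cluster{
				Server:               "https://192.169.0.5:8443",
				CertificateAuthority: "/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt",
			}
			cfg.AuthInfos["ha-857000"] = &clientcmdapi.AuthInfo{
				ClientCertificate: "/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt",
				ClientKey:         "/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key",
			}
			cfg.Contexts["ha-857000"] = &clientcmdapi.Context{Cluster: "ha-857000", AuthInfo: "ha-857000"}
		}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			panic(err)
		}
	}
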
	I0917 02:06:22.206259    3951 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:06:22.213498    3951 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 02:06:22.213509    3951 kubeadm.go:597] duration metric: took 16.605221ms to restartPrimaryControlPlane
	I0917 02:06:22.213515    3951 kubeadm.go:394] duration metric: took 37.238807ms to StartCluster
	I0917 02:06:22.213523    3951 settings.go:142] acquiring lock: {Name:mk5de70c670a23cf49fdbd19ddb5c1d2a9de9a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:22.213600    3951 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:06:22.213968    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:06:22.214179    3951 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:06:22.214192    3951 start.go:241] waiting for startup goroutines ...
	I0917 02:06:22.214206    3951 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:06:22.214324    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:22.256356    3951 out.go:177] * Enabled addons: 
	I0917 02:06:22.277195    3951 addons.go:510] duration metric: took 62.984897ms for enable addons: enabled=[]
	I0917 02:06:22.277282    3951 start.go:246] waiting for cluster config update ...
	I0917 02:06:22.277295    3951 start.go:255] writing updated cluster config ...
	I0917 02:06:22.300377    3951 out.go:201] 
	I0917 02:06:22.321646    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:22.321775    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:22.343932    3951 out.go:177] * Starting "ha-857000-m02" control-plane node in "ha-857000" cluster
	I0917 02:06:22.386310    3951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:06:22.386345    3951 cache.go:56] Caching tarball of preloaded images
	I0917 02:06:22.386520    3951 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:06:22.386539    3951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:06:22.386678    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:22.387665    3951 start.go:360] acquireMachinesLock for ha-857000-m02: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:06:22.387781    3951 start.go:364] duration metric: took 93.188µs to acquireMachinesLock for "ha-857000-m02"
	I0917 02:06:22.387807    3951 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:06:22.387815    3951 fix.go:54] fixHost starting: m02
	I0917 02:06:22.388245    3951 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:06:22.388280    3951 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:06:22.397656    3951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51937
	I0917 02:06:22.397993    3951 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:06:22.398338    3951 main.go:141] libmachine: Using API Version  1
	I0917 02:06:22.398355    3951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:06:22.398604    3951 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:06:22.398732    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:22.398839    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetState
	I0917 02:06:22.398926    3951 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:22.398995    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3905
	I0917 02:06:22.399925    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3905 missing from process table
	I0917 02:06:22.399987    3951 fix.go:112] recreateIfNeeded on ha-857000-m02: state=Stopped err=<nil>
	I0917 02:06:22.400002    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	W0917 02:06:22.400097    3951 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:06:22.442146    3951 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m02" ...
	I0917 02:06:22.463239    3951 main.go:141] libmachine: (ha-857000-m02) Calling .Start
	I0917 02:06:22.463548    3951 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:22.463605    3951 main.go:141] libmachine: (ha-857000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid
	I0917 02:06:22.465343    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3905 missing from process table
	I0917 02:06:22.465354    3951 main.go:141] libmachine: (ha-857000-m02) DBG | pid 3905 is in state "Stopped"
	I0917 02:06:22.465372    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid...
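
	The driver above decided pid 3905 was stale because it no longer appears in the process table, then removed the leftover hyperkit.pid before restarting the VM. A minimal Go sketch of that liveness test via signal 0 (the pid-file path is a stand-in, not the machine-dir path from the log):

	package main

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
		"syscall"
	)

	func main() {
		path := "hyperkit.pid" // stand-in for the machine dir's hyperkit.pid
		raw, err := os.ReadFile(path)
		if err != nil {
			return // no pid file: nothing stale to clean up
		}
		pid, err := strconv.Atoi(strings.TrimSpace(string(raw)))
		if err != nil {
			os.Remove(path) // unparsable pid file is stale by definition
			return
		}
		// Signal 0 delivers nothing; it only checks whether the pid exists.
		// ESRCH means "missing from process table", exactly the log's case.
		if err := syscall.Kill(pid, 0); err == syscall.ESRCH {
			fmt.Printf("pid %d missing from process table, removing %s\n", pid, path)
			os.Remove(path)
		}
	}
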
	I0917 02:06:22.465746    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Using UUID 1940a045-0b1b-4b28-842d-d4858a62cbd3
	I0917 02:06:22.495548    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Generated MAC 9a:95:4e:4b:65:fe
	I0917 02:06:22.495583    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:06:22.495857    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000383560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:22.495910    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000383560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:06:22.496018    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1940a045-0b1b-4b28-842d-d4858a62cbd3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:06:22.496120    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1940a045-0b1b-4b28-842d-d4858a62cbd3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:06:22.496143    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:06:22.497973    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 DEBUG: hyperkit: Pid is 3976
	I0917 02:06:22.498454    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Attempt 0
	I0917 02:06:22.498484    3951 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:06:22.498545    3951 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3976
	I0917 02:06:22.500282    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Searching for 9a:95:4e:4b:65:fe in /var/db/dhcpd_leases ...
	I0917 02:06:22.500349    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:06:22.500372    3951 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea9805}
	I0917 02:06:22.500382    3951 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:06:22.500397    3951 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea97be}
	I0917 02:06:22.500410    3951 main.go:141] libmachine: (ha-857000-m02) DBG | Found match: 9a:95:4e:4b:65:fe
	I0917 02:06:22.500437    3951 main.go:141] libmachine: (ha-857000-m02) DBG | IP: 192.169.0.6
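
	The MAC-to-IP resolution above scans /var/db/dhcpd_leases for the VM's generated MAC. A Go sketch of the lookup, assuming the macOS bootpd stanza format with name=/ip_address=/hw_address= lines and ip_address preceding hw_address within a stanza; the parser is a simplification of what the driver does:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func ipForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address=1,9a:95:4e:4b:65:fe — drop the "1," type prefix.
				if strings.HasSuffix(line, ","+mac) {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("no lease for %s", mac)
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "9a:95:4e:4b:65:fe")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ip) // 192.169.0.6 in the run above
	}
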
	I0917 02:06:22.500486    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetConfigRaw
	I0917 02:06:22.501123    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:06:22.501362    3951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:06:22.501877    3951 machine.go:93] provisionDockerMachine start ...
	I0917 02:06:22.501887    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:22.502006    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:22.502140    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:22.502253    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:22.502355    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:22.502453    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:22.502592    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:22.502794    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:22.502804    3951 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:06:22.506011    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:06:22.516718    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:06:22.517536    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:22.517559    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:22.517587    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:22.517605    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:22.902525    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:06:22.902540    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:06:23.017245    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:06:23.017263    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:06:23.017272    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:06:23.017286    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:06:23.018137    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:06:23.018146    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:06:28.664665    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:06:28.664731    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:06:28.664739    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:06:28.688834    3951 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:06:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:06:33.560885    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:06:33.560902    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:06:33.561080    3951 buildroot.go:166] provisioning hostname "ha-857000-m02"
	I0917 02:06:33.561088    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:06:33.561176    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.561264    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.561361    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.561457    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.561572    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.561724    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.561884    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.561894    3951 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m02 && echo "ha-857000-m02" | sudo tee /etc/hostname
	I0917 02:06:33.626435    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m02
	
	I0917 02:06:33.626450    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.626583    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.626692    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.626783    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.626875    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.627027    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.627173    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.627184    3951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:06:33.685124    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:06:33.685140    3951 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:06:33.685149    3951 buildroot.go:174] setting up certificates
	I0917 02:06:33.685155    3951 provision.go:84] configureAuth start
	I0917 02:06:33.685161    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:06:33.685285    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:06:33.685391    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.685472    3951 provision.go:143] copyHostCerts
	I0917 02:06:33.685505    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:33.685552    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:06:33.685558    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:06:33.685701    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:06:33.686213    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:33.686248    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:06:33.686252    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:06:33.686328    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:06:33.686464    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:33.686504    3951 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:06:33.686509    3951 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:06:33.686577    3951 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:06:33.686713    3951 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m02 san=[127.0.0.1 192.169.0.6 ha-857000-m02 localhost minikube]
	I0917 02:06:33.724325    3951 provision.go:177] copyRemoteCerts
	I0917 02:06:33.724374    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:06:33.724388    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.724531    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.724628    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.724718    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.724808    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:33.757977    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:06:33.758053    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:06:33.777137    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:06:33.777203    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:06:33.796184    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:06:33.796248    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:06:33.815739    3951 provision.go:87] duration metric: took 130.575095ms to configureAuth
	I0917 02:06:33.815753    3951 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:06:33.815923    3951 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:06:33.815937    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:33.816066    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.816180    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.816266    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.816357    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.816435    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.816546    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.816672    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.816679    3951 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:06:33.868528    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:06:33.868540    3951 buildroot.go:70] root file system type: tmpfs
	I0917 02:06:33.868626    3951 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:06:33.868638    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.868774    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.868862    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.868957    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.869038    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.869178    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.869313    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.869355    3951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:06:33.934180    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:06:33.934199    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:33.934331    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:33.934438    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.934537    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:33.934624    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:33.934753    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:33.934890    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:33.934902    3951 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:06:35.613474    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:06:35.613490    3951 machine.go:96] duration metric: took 13.111377814s to provisionDockerMachine
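
	With docker.service installed and restarted, dockerd now listens on tcp://0.0.0.0:2376 behind --tlsverify, using the /etc/docker server certs provisioned earlier. A Go sketch of a mutual-TLS client handshake against that endpoint, assuming the host-side ca.pem/cert.pem/key.pem from the machine certs directory seen in the log (the dial itself is illustrative, not minikube code):

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"os"
	)

	func main() {
		dir := "/Users/jenkins/minikube-integration/19648-1025/.minikube/certs"
		caPEM, err := os.ReadFile(dir + "/ca.pem")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(caPEM) {
			panic("no CA certs parsed")
		}
		cert, err := tls.LoadX509KeyPair(dir+"/cert.pem", dir+"/key.pem")
		if err != nil {
			panic(err)
		}
		// dockerd was started with --tlsverify, so the server demands a
		// client cert signed by the same CA it trusts.
		conn, err := tls.Dial("tcp", "192.169.0.6:2376", &tls.Config{
			RootCAs:      pool,
			Certificates: []tls.Certificate{cert},
		})
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		fmt.Println("mutual-TLS handshake with dockerd succeeded")
	}
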
	I0917 02:06:35.613498    3951 start.go:293] postStartSetup for "ha-857000-m02" (driver="hyperkit")
	I0917 02:06:35.613517    3951 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:06:35.613531    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.613729    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:06:35.613743    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.613853    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.613946    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.614026    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.614114    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:35.652452    3951 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:06:35.656174    3951 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:06:35.656186    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:06:35.656273    3951 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:06:35.656413    3951 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:06:35.656420    3951 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:06:35.656581    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:06:35.665638    3951 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:06:35.696288    3951 start.go:296] duration metric: took 82.770634ms for postStartSetup
	I0917 02:06:35.696319    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.696511    3951 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:06:35.696525    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.696625    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.696706    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.696794    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.696893    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:35.729642    3951 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:06:35.729708    3951 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:06:35.783199    3951 fix.go:56] duration metric: took 13.395150311s for fixHost
	I0917 02:06:35.783224    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.783375    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.783476    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.783551    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.783631    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.783768    3951 main.go:141] libmachine: Using SSH client type: native
	I0917 02:06:35.783899    3951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2529820] 0x252c500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:06:35.783906    3951 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:06:35.838274    3951 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726563995.926909320
	
	I0917 02:06:35.838288    3951 fix.go:216] guest clock: 1726563995.926909320
	I0917 02:06:35.838293    3951 fix.go:229] Guest: 2024-09-17 02:06:35.92690932 -0700 PDT Remote: 2024-09-17 02:06:35.783213 -0700 PDT m=+32.178408818 (delta=143.69632ms)
	I0917 02:06:35.838302    3951 fix.go:200] guest clock delta is within tolerance: 143.69632ms
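
	The guest-clock check above runs `date +%s.%N` over SSH, parses the result, and compares it with the host clock; here the ~144ms delta was within tolerance. A Go sketch of that comparison, with the 2s threshold an assumed value for illustration only:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// stdout of `date +%s.%N` on the guest, as captured in the log above.
		guestOut := "1726563995.926909320"
		parts := strings.SplitN(guestOut, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)

		delta := time.Since(guest) // host "Remote" time vs guest time
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v\n", delta)
		// Past the tolerance, the guest clock would be resynced before
		// proceeding; within it, startup just continues.
		if delta > 2*time.Second {
			fmt.Println("delta exceeds tolerance; resync guest clock")
		}
	}
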
	I0917 02:06:35.838306    3951 start.go:83] releasing machines lock for "ha-857000-m02", held for 13.450280733s
	I0917 02:06:35.838324    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.838459    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:06:35.861800    3951 out.go:177] * Found network options:
	I0917 02:06:35.882860    3951 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 02:06:35.903716    3951 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:06:35.903755    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.904608    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.904879    3951 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:06:35.905023    3951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:06:35.905064    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	W0917 02:06:35.905084    3951 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:06:35.905192    3951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:06:35.905211    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:06:35.905229    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.905436    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:06:35.905470    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.905665    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:06:35.905679    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.905849    3951 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:06:35.905865    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:06:35.905991    3951 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	W0917 02:06:35.936887    3951 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:06:35.936958    3951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:06:36.007933    3951 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
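Annotation: the find/mv pipeline above renames any bridge or podman CNI configs under /etc/cni/net.d to `*.mk_disabled` so they stop conflicting with the cluster's own CNI. An equivalent sketch in Go, purely illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs mimics the find/mv command above: every bridge or podman
// config in dir gains a .mk_disabled suffix so the runtime no longer loads it.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	d, err := disableBridgeCNIs("/etc/cni/net.d")
	fmt.Println(d, err)
}
```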
	I0917 02:06:36.007953    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:36.008056    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:36.024338    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:06:36.033262    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:06:36.042136    3951 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:06:36.042188    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:06:36.050818    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:36.059619    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:06:36.068394    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:06:36.077285    3951 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:06:36.086317    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:06:36.094948    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:06:36.103691    3951 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:06:36.112538    3951 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:06:36.120508    3951 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:06:36.128434    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:36.230022    3951 ssh_runner.go:195] Run: sudo systemctl restart containerd
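Annotation: the sed commands above force containerd onto the cgroupfs driver by rewriting `SystemdCgroup = ...` to `false` in /etc/containerd/config.toml, then daemon-reload and restart. A Go sketch of the same edit (a real implementation should parse the TOML rather than regex it):

```go
package main

import (
	"fmt"
	"regexp"
)

// systemdCgroupRe matches a "SystemdCgroup = ..." line, keeping its indent,
// just like the sed expression in the log above.
var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

// setCgroupfs rewrites the config so containerd uses cgroupfs instead of the
// systemd cgroup driver.
func setCgroupfs(config string) string {
	return systemdCgroupRe.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	fmt.Println(setCgroupfs(in))
}
```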
	I0917 02:06:36.250428    3951 start.go:495] detecting cgroup driver to use...
	I0917 02:06:36.250505    3951 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:06:36.273190    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:36.285496    3951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:06:36.303235    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:06:36.314994    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:36.325990    3951 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:06:36.351133    3951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:06:36.362290    3951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:06:36.377230    3951 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:06:36.380093    3951 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:06:36.387911    3951 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:06:36.401199    3951 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:06:36.507714    3951 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:06:36.609258    3951 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:06:36.609285    3951 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:06:36.623332    3951 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:06:36.718880    3951 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:07:37.748739    3951 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.028781405s)
	I0917 02:07:37.748815    3951 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 02:07:37.786000    3951 out.go:201] 
	W0917 02:07:37.809190    3951 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 09:06:34 ha-857000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:06:34 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:34.324120961Z" level=info msg="Starting up"
	Sep 17 09:06:34 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:34.324775253Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 09:06:34 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:34.325518826Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=488
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.341058185Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356213648Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356261078Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356303349Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356313782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356436154Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356475371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356593098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356628148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356640458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356648167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356767218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.356926440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358525862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358564683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358679405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358712925Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358797431Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.358843725Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.360911977Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.360974504Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361053471Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361068314Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361078324Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361121426Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361365784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361471567Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361506271Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361517719Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361527110Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361535526Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361543621Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361552701Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361562674Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361570939Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361578985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361588503Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361603316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361612406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361620269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361628602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361638647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361646859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361654306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361662885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361671295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361681400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361690597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361698250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361705966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361720758Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361737654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361746364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361754112Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361847279Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361861726Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361869503Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361877991Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361885443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361899338Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.361911740Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362480967Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362549430Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362632268Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 09:06:34 ha-857000-m02 dockerd[488]: time="2024-09-17T09:06:34.362920029Z" level=info msg="containerd successfully booted in 0.022632s"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.344850604Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.385337180Z" level=info msg="Loading containers: start."
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.568192740Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.627785197Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.670471622Z" level=info msg="Loading containers: done."
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.677239663Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.677408183Z" level=info msg="Daemon has completed initialization"
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.699597178Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 09:06:35 ha-857000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 17 09:06:35 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:35.704823863Z" level=info msg="API listen on [::]:2376"
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.821530126Z" level=info msg="Processing signal 'terminated'"
	Sep 17 09:06:36 ha-857000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.822577679Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.823011519Z" level=info msg="Daemon shutdown complete"
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.823037716Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 09:06:36 ha-857000-m02 dockerd[481]: time="2024-09-17T09:06:36.823053677Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 09:06:37 ha-857000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 09:06:37 ha-857000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 09:06:37 ha-857000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:06:37 ha-857000-m02 dockerd[1158]: time="2024-09-17T09:06:37.864990112Z" level=info msg="Starting up"
	Sep 17 09:07:37 ha-857000-m02 dockerd[1158]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 09:07:37 ha-857000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 09:07:37 ha-857000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 09:07:37 ha-857000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
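Annotation: the journal above shows the shape of the failure: the first dockerd (pid 481) started its own managed containerd and came up, but after minikube's reconfiguration and restart the second dockerd (pid 1158) waits on /run/containerd/containerd.sock, which never answers (the system containerd was stopped a few lines earlier in the trace), and gives up exactly 60 seconds later with "context deadline exceeded". Which config change pointed dockerd at that socket is not visible in this window. A small Go sketch of the same wait-for-socket-with-deadline pattern that produces that error and the minute-long silent gap:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// waitForSocket dials a unix socket until it answers or the context deadline
// expires, mirroring dockerd's "failed to dial .../containerd.sock: context
// deadline exceeded" behaviour in the journal above. Sketch only.
func waitForSocket(ctx context.Context, path string) error {
	var d net.Dialer
	for {
		conn, err := d.DialContext(ctx, "unix", path)
		if err == nil {
			conn.Close()
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("failed to dial %q: %w", path, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	fmt.Println(waitForSocket(ctx, "/run/containerd/containerd.sock"))
}
```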
	W0917 02:07:37.809292    3951 out.go:270] * 
	W0917 02:07:37.810458    3951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:07:37.874286    3951 out.go:201] 
	
	
	==> Docker <==
	Sep 17 09:06:28 ha-857000 dockerd[1182]: time="2024-09-17T09:06:28.953788653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:06:50 ha-857000 dockerd[1176]: time="2024-09-17T09:06:50.570882674Z" level=info msg="ignoring event" container=6dbb2f7111cc445377bba802440bf8e10f56f3e5a0e88f69f41d840619ffa219 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:06:50 ha-857000 dockerd[1182]: time="2024-09-17T09:06:50.571701735Z" level=info msg="shim disconnected" id=6dbb2f7111cc445377bba802440bf8e10f56f3e5a0e88f69f41d840619ffa219 namespace=moby
	Sep 17 09:06:50 ha-857000 dockerd[1182]: time="2024-09-17T09:06:50.571758895Z" level=warning msg="cleaning up after shim disconnected" id=6dbb2f7111cc445377bba802440bf8e10f56f3e5a0e88f69f41d840619ffa219 namespace=moby
	Sep 17 09:06:50 ha-857000 dockerd[1182]: time="2024-09-17T09:06:50.571767359Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:06:51 ha-857000 dockerd[1176]: time="2024-09-17T09:06:51.580125433Z" level=info msg="ignoring event" container=f1252151601a7acad5aef9c02d49be638390576e96a0ad87836a6b56f813f5c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:06:51 ha-857000 dockerd[1182]: time="2024-09-17T09:06:51.581344041Z" level=info msg="shim disconnected" id=f1252151601a7acad5aef9c02d49be638390576e96a0ad87836a6b56f813f5c1 namespace=moby
	Sep 17 09:06:51 ha-857000 dockerd[1182]: time="2024-09-17T09:06:51.581601552Z" level=warning msg="cleaning up after shim disconnected" id=f1252151601a7acad5aef9c02d49be638390576e96a0ad87836a6b56f813f5c1 namespace=moby
	Sep 17 09:06:51 ha-857000 dockerd[1182]: time="2024-09-17T09:06:51.581639267Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.085279461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.085342970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.085355817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.085528340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.087547026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.087599271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.087608710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:07:13 ha-857000 dockerd[1182]: time="2024-09-17T09:07:13.087706284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:07:33 ha-857000 dockerd[1176]: time="2024-09-17T09:07:33.582121952Z" level=info msg="ignoring event" container=2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:07:33 ha-857000 dockerd[1182]: time="2024-09-17T09:07:33.583058738Z" level=info msg="shim disconnected" id=2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437 namespace=moby
	Sep 17 09:07:33 ha-857000 dockerd[1182]: time="2024-09-17T09:07:33.583223961Z" level=warning msg="cleaning up after shim disconnected" id=2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437 namespace=moby
	Sep 17 09:07:33 ha-857000 dockerd[1182]: time="2024-09-17T09:07:33.583260138Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:07:34 ha-857000 dockerd[1176]: time="2024-09-17T09:07:34.599859784Z" level=info msg="ignoring event" container=5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:07:34 ha-857000 dockerd[1182]: time="2024-09-17T09:07:34.601045096Z" level=info msg="shim disconnected" id=5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761 namespace=moby
	Sep 17 09:07:34 ha-857000 dockerd[1182]: time="2024-09-17T09:07:34.601095683Z" level=warning msg="cleaning up after shim disconnected" id=5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761 namespace=moby
	Sep 17 09:07:34 ha-857000 dockerd[1182]: time="2024-09-17T09:07:34.601106271Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2d39a363ecf53       6bab7719df100                                                                                         31 seconds ago       Exited              kube-apiserver            2                   d1c62bd0a7eda       kube-apiserver-ha-857000
	5043e9bda2acc       175ffd71cce3d                                                                                         31 seconds ago       Exited              kube-controller-manager   2                   d830cb545033a       kube-controller-manager-ha-857000
	034279696db8f       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   4205e70bfa1bb       kube-vip-ha-857000
	d9fae1497b048       9aa1fad941575                                                                                         About a minute ago   Running             kube-scheduler            1                   37d9fe68f2e59       kube-scheduler-ha-857000
	f4f59b8c76404       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      1                   a23094a650513       etcd-ha-857000
	fe908ac73b00f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited              busybox                   0                   80864159ef38e       busybox-7dff88458-4jzg8
	521527f17691c       c69fa2e9cbf5f                                                                                         6 minutes ago        Exited              coredns                   0                   aa21641a5b16e       coredns-7c65d6cfc9-nl5j5
	f991c8e956d90       c69fa2e9cbf5f                                                                                         6 minutes ago        Exited              coredns                   0                   da08087b51cd9       coredns-7c65d6cfc9-fg65r
	611759af4bf7a       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   08dee0a668f3d       storage-provisioner
	5d84a01abd3e7       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              6 minutes ago        Exited              kindnet-cni               0                   38db6fab73655       kindnet-7pf7v
	0b03e5e488939       60c005f310ff3                                                                                         6 minutes ago        Exited              kube-proxy                0                   067bc1b2ad7fa       kube-proxy-vskbj
	fcb7038a6ac9e       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago        Exited              kube-vip                  0                   b74867bd31c54       kube-vip-ha-857000
	2da1b67c167c6       9aa1fad941575                                                                                         6 minutes ago        Exited              kube-scheduler            0                   f2b2b320ed41a       kube-scheduler-ha-857000
	6989933ec650e       2e96e5913fc06                                                                                         6 minutes ago        Exited              etcd                      0                   43536bf53cbec       etcd-ha-857000
	
	
	==> coredns [521527f17691] <==
	[INFO] 10.244.2.2:33230 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100028s
	[INFO] 10.244.2.2:37727 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077614s
	[INFO] 10.244.2.2:51233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090375s
	[INFO] 10.244.1.2:43082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115984s
	[INFO] 10.244.1.2:45048 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000071244s
	[INFO] 10.244.1.2:48877 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106601s
	[INFO] 10.244.1.2:59235 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000068348s
	[INFO] 10.244.1.2:53808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064222s
	[INFO] 10.244.1.2:54982 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064992s
	[INFO] 10.244.0.4:59177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012236s
	[INFO] 10.244.0.4:59468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096608s
	[INFO] 10.244.0.4:49953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108018s
	[INFO] 10.244.2.2:36658 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081427s
	[INFO] 10.244.1.2:53166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140458s
	[INFO] 10.244.1.2:60442 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069729s
	[INFO] 10.244.0.4:60564 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007076s
	[INFO] 10.244.0.4:57696 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000125726s
	[INFO] 10.244.2.2:33447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114855s
	[INFO] 10.244.2.2:49647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000058138s
	[INFO] 10.244.2.2:55869 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00009725s
	[INFO] 10.244.1.2:49826 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096631s
	[INFO] 10.244.1.2:33376 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000046366s
	[INFO] 10.244.1.2:47184 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f991c8e956d9] <==
	[INFO] 10.244.1.2:36169 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206963s
	[INFO] 10.244.1.2:33814 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000088589s
	[INFO] 10.244.1.2:57385 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000535008s
	[INFO] 10.244.0.4:54856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135529s
	[INFO] 10.244.0.4:47831 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.019088159s
	[INFO] 10.244.0.4:46325 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201714s
	[INFO] 10.244.0.4:45239 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000255383s
	[INFO] 10.244.0.4:55042 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000141827s
	[INFO] 10.244.2.2:47888 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108401s
	[INFO] 10.244.2.2:41486 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00044994s
	[INFO] 10.244.2.2:50623 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082841s
	[INFO] 10.244.1.2:54143 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098038s
	[INFO] 10.244.1.2:38802 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046632s
	[INFO] 10.244.0.4:39532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.002579505s
	[INFO] 10.244.2.2:53978 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077749s
	[INFO] 10.244.2.2:60710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092889s
	[INFO] 10.244.2.2:51255 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044117s
	[INFO] 10.244.1.2:36996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056219s
	[INFO] 10.244.1.2:39487 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090704s
	[INFO] 10.244.0.4:39831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131192s
	[INFO] 10.244.0.4:35770 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154922s
	[INFO] 10.244.2.2:45820 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113973s
	[INFO] 10.244.1.2:44519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120184s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0917 09:07:44.846237    2950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 09:07:44.848300    2950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 09:07:44.850436    2950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 09:07:44.852392    2950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0917 09:07:44.854238    2950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035496] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007916] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.708278] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007008] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.718295] systemd-fstab-generator[126]: Ignoring "noauto" option for root device
	[  +2.225909] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.454376] systemd-fstab-generator[462]: Ignoring "noauto" option for root device
	[  +0.098861] systemd-fstab-generator[474]: Ignoring "noauto" option for root device
	[  +1.963292] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.257545] systemd-fstab-generator[1142]: Ignoring "noauto" option for root device
	[  +0.117262] systemd-fstab-generator[1154]: Ignoring "noauto" option for root device
	[  +0.053463] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.056651] systemd-fstab-generator[1168]: Ignoring "noauto" option for root device
	[  +2.442306] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	[  +0.098086] systemd-fstab-generator[1395]: Ignoring "noauto" option for root device
	[  +0.113966] systemd-fstab-generator[1407]: Ignoring "noauto" option for root device
	[  +0.114036] systemd-fstab-generator[1422]: Ignoring "noauto" option for root device
	[  +0.434156] systemd-fstab-generator[1582]: Ignoring "noauto" option for root device
	[  +6.997669] kauditd_printk_skb: 190 callbacks suppressed
	[ +21.952863] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [6989933ec650] <==
	2024/09/17 09:05:55 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T09:05:55.944117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.248286841s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T09:05:55.944127Z","caller":"traceutil/trace.go:171","msg":"trace[182147551] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; }","duration":"5.248299338s","start":"2024-09-17T09:05:50.695825Z","end":"2024-09-17T09:05:55.944124Z","steps":["trace[182147551] 'agreement among raft nodes before linearized reading'  (duration: 5.248286916s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T09:05:55.944136Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T09:05:50.695789Z","time spent":"5.248344269s","remote":"127.0.0.1:52050","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":0,"request content":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true "}
	2024/09/17 09:05:55 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T09:05:55.984755Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T09:05:55.984786Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T09:05:55.984817Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-17T09:05:55.987724Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.987747Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988090Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988144Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988200Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988245Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988255Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ff70cdb626651bff"}
	{"level":"info","ts":"2024-09-17T09:05:55.988259Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988265Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988292Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988663Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988686Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988708Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.988717Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:05:55.991208Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T09:05:55.991249Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-17T09:05:55.991256Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-857000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [f4f59b8c7640] <==
	{"level":"info","ts":"2024-09-17T09:07:40.670344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:40.670354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:40.670360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:41.971244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:41.971287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:41.971302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:41.971320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:41.971331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:43.270305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:43.270346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:43.270361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:43.270380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:43.270403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:07:43.689955Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275666,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:07:44.191157Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275666,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:07:44.236208Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:07:44.236250Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:07:44.248681Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-17T09:07:44.248702Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"info","ts":"2024-09-17T09:07:44.570369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:44.570454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:44.570473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:44.570492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:07:44.570502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:07:44.691450Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275666,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 09:07:45 up 1 min,  0 users,  load average: 0.30, 0.12, 0.04
	Linux ha-857000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5d84a01abd3e] <==
	I0917 09:05:22.964948       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:32.966280       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:32.966503       1 main.go:299] handling current node
	I0917 09:05:32.966605       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:32.966739       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:32.966951       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:32.967059       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:32.967333       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:32.967449       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:42.964585       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:42.964999       1 main.go:299] handling current node
	I0917 09:05:42.965252       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:42.965422       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:42.965746       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:42.965829       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:42.966204       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:42.966357       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965279       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:52.965376       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:52.965533       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:52.965592       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:52.965673       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:52.965753       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965812       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:52.965902       1 main.go:299] handling current node
	
	
	==> kube-apiserver [2d39a363ecf5] <==
	I0917 09:07:13.208670       1 options.go:228] external host was not specified, using 192.169.0.5
	I0917 09:07:13.210069       1 server.go:142] Version: v1.31.1
	I0917 09:07:13.210101       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:07:13.559096       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 09:07:13.563623       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 09:07:13.563675       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 09:07:13.563832       1 instance.go:232] Using reconciler: lease
	I0917 09:07:13.564198       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0917 09:07:33.559987       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 09:07:33.560076       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0917 09:07:33.564856       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0917 09:07:33.565081       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [5043e9bda2ac] <==
	I0917 09:07:13.602574       1 serving.go:386] Generated self-signed cert in-memory
	I0917 09:07:13.965025       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 09:07:13.965059       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:07:13.966263       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 09:07:13.966394       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 09:07:13.966278       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 09:07:13.966269       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0917 09:07:34.581719       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [0b03e5e48893] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:00:59.069869       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:00:59.079118       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 09:00:59.079199       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:00:59.109184       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:00:59.109227       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:00:59.109245       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:00:59.111661       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:00:59.111847       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:00:59.111876       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:59.112952       1 config.go:199] "Starting service config controller"
	I0917 09:00:59.112979       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:00:59.112995       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:00:59.112998       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:00:59.113603       1 config.go:328] "Starting node config controller"
	I0917 09:00:59.113673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:00:59.213587       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:00:59.213649       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:00:59.213808       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2da1b67c167c] <==
	E0917 09:03:33.866320       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 887505bd-cf68-4e77-be17-99550df4b4b4(default/busybox-7dff88458-4jzg8) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-4jzg8"
	E0917 09:03:33.866475       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4jzg8\": pod busybox-7dff88458-4jzg8 is already assigned to node \"ha-857000\"" pod="default/busybox-7dff88458-4jzg8"
	I0917 09:03:33.866490       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-4jzg8" node="ha-857000"
	E0917 09:03:33.876570       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5x9l8\": pod busybox-7dff88458-5x9l8 is already assigned to node \"ha-857000-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-5x9l8" node="ha-857000-m03"
	E0917 09:03:33.876627       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod dfc21081-4b44-4f15-9713-8dbd1797a985(default/busybox-7dff88458-5x9l8) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-5x9l8"
	E0917 09:03:33.876641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5x9l8\": pod busybox-7dff88458-5x9l8 is already assigned to node \"ha-857000-m03\"" pod="default/busybox-7dff88458-5x9l8"
	I0917 09:03:33.876653       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5x9l8" node="ha-857000-m03"
	E0917 09:04:05.799466       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zchkt\": pod kube-proxy-zchkt is already assigned to node \"ha-857000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zchkt" node="ha-857000-m04"
	E0917 09:04:05.799587       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c03ed58b-9571-4d9e-bb6b-c12332f7766a(kube-system/kube-proxy-zchkt) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zchkt"
	E0917 09:04:05.799651       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zchkt\": pod kube-proxy-zchkt is already assigned to node \"ha-857000-m04\"" pod="kube-system/kube-proxy-zchkt"
	I0917 09:04:05.799843       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zchkt" node="ha-857000-m04"
	E0917 09:04:05.810597       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4jk9v\": pod kindnet-4jk9v is already assigned to node \"ha-857000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4jk9v" node="ha-857000-m04"
	E0917 09:04:05.810752       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 24a018c6-9cbb-4d17-a295-8fef456534a0(kube-system/kindnet-4jk9v) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4jk9v"
	E0917 09:04:05.811044       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4jk9v\": pod kindnet-4jk9v is already assigned to node \"ha-857000-m04\"" pod="kube-system/kindnet-4jk9v"
	I0917 09:04:05.811236       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4jk9v" node="ha-857000-m04"
	E0917 09:04:05.816361       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-q5f2s\": pod kindnet-q5f2s is already assigned to node \"ha-857000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-q5f2s" node="ha-857000-m04"
	E0917 09:04:05.816486       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-q5f2s\": pod kindnet-q5f2s is already assigned to node \"ha-857000-m04\"" pod="kube-system/kindnet-q5f2s"
	E0917 09:04:05.829276       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tbbh2\": pod kindnet-tbbh2 is already assigned to node \"ha-857000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tbbh2" node="ha-857000-m04"
	E0917 09:04:05.829463       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a3360b43-cfb5-45f5-9de3-cb8bfd82ac14(kube-system/kindnet-tbbh2) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tbbh2"
	E0917 09:04:05.829578       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tbbh2\": pod kindnet-tbbh2 is already assigned to node \"ha-857000-m04\"" pod="kube-system/kindnet-tbbh2"
	I0917 09:04:05.829611       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tbbh2" node="ha-857000-m04"
	I0917 09:05:55.853932       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 09:05:55.858618       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0917 09:05:55.858815       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 09:05:55.881585       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d9fae1497b04] <==
	E0917 09:07:31.901651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 09:07:32.071397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0917 09:07:32.071649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0917 09:07:34.580116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33886->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.580206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33886->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.580455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42496->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.580535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42496->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.580863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33904->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.580996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33904->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.581303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33898->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.581360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33898->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.581744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33862->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.582050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33862->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.582256       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33854->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.582356       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33854->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.582989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42470->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.583036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42470->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.583221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42488->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.583273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42488->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.583499       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33884->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.583538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:33884->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.583720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42464->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.583760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42464->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0917 09:07:34.583992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42512->192.169.0.5:8443: read: connection reset by peer
	E0917 09:07:34.584033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:42512->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 17 09:07:33 ha-857000 kubelet[1589]: I0917 09:07:33.851330    1589 scope.go:117] "RemoveContainer" containerID="2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437"
	Sep 17 09:07:33 ha-857000 kubelet[1589]: E0917 09:07:33.851432    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-857000_kube-system(f9e4594843635b7ed6662a3474d619e9)\"" pod="kube-system/kube-apiserver-ha-857000" podUID="f9e4594843635b7ed6662a3474d619e9"
	Sep 17 09:07:34 ha-857000 kubelet[1589]: I0917 09:07:34.867010    1589 scope.go:117] "RemoveContainer" containerID="f1252151601a7acad5aef9c02d49be638390576e96a0ad87836a6b56f813f5c1"
	Sep 17 09:07:34 ha-857000 kubelet[1589]: I0917 09:07:34.868149    1589 scope.go:117] "RemoveContainer" containerID="5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761"
	Sep 17 09:07:34 ha-857000 kubelet[1589]: E0917 09:07:34.868227    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857000_kube-system(2e359dfc5a9c04b45f1f4ad5b0c126ca)\"" pod="kube-system/kube-controller-manager-ha-857000" podUID="2e359dfc5a9c04b45f1f4ad5b0c126ca"
	Sep 17 09:07:35 ha-857000 kubelet[1589]: E0917 09:07:35.692107    1589 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-857000.17f5fccb3d09c90c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-857000,UID:ha-857000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-857000,},FirstTimestamp:2024-09-17 09:06:21.999065356 +0000 UTC m=+0.224912397,LastTimestamp:2024-09-17 09:06:21.999065356 +0000 UTC m=+0.224912397,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-857000,}"
	Sep 17 09:07:36 ha-857000 kubelet[1589]: I0917 09:07:36.557057    1589 kubelet_node_status.go:72] "Attempting to register node" node="ha-857000"
	Sep 17 09:07:37 ha-857000 kubelet[1589]: I0917 09:07:37.880843    1589 scope.go:117] "RemoveContainer" containerID="5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761"
	Sep 17 09:07:37 ha-857000 kubelet[1589]: E0917 09:07:37.881410    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857000_kube-system(2e359dfc5a9c04b45f1f4ad5b0c126ca)\"" pod="kube-system/kube-controller-manager-ha-857000" podUID="2e359dfc5a9c04b45f1f4ad5b0c126ca"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: E0917 09:07:38.763996    1589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-857000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: W0917 09:07:38.764000    1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 09:07:38 ha-857000 kubelet[1589]: E0917 09:07:38.764047    1589 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-857000"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: E0917 09:07:38.764044    1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: I0917 09:07:38.848033    1589 scope.go:117] "RemoveContainer" containerID="2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437"
	Sep 17 09:07:38 ha-857000 kubelet[1589]: E0917 09:07:38.848255    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-857000_kube-system(f9e4594843635b7ed6662a3474d619e9)\"" pod="kube-system/kube-apiserver-ha-857000" podUID="f9e4594843635b7ed6662a3474d619e9"
	Sep 17 09:07:40 ha-857000 kubelet[1589]: I0917 09:07:40.089264    1589 scope.go:117] "RemoveContainer" containerID="5043e9bda2acca013e777653865484f1467f9346624b61dd9e0a64afb712c761"
	Sep 17 09:07:40 ha-857000 kubelet[1589]: E0917 09:07:40.089464    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857000_kube-system(2e359dfc5a9c04b45f1f4ad5b0c126ca)\"" pod="kube-system/kube-controller-manager-ha-857000" podUID="2e359dfc5a9c04b45f1f4ad5b0c126ca"
	Sep 17 09:07:40 ha-857000 kubelet[1589]: I0917 09:07:40.554783    1589 scope.go:117] "RemoveContainer" containerID="2d39a363ecf53c7e25c28e0097a1b3c6a6ee70ae1a6443746c0bee850e42b437"
	Sep 17 09:07:40 ha-857000 kubelet[1589]: E0917 09:07:40.554930    1589 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-857000_kube-system(f9e4594843635b7ed6662a3474d619e9)\"" pod="kube-system/kube-apiserver-ha-857000" podUID="f9e4594843635b7ed6662a3474d619e9"
	Sep 17 09:07:41 ha-857000 kubelet[1589]: W0917 09:07:41.836670    1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-857000&limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 09:07:41 ha-857000 kubelet[1589]: W0917 09:07:41.836670    1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 17 09:07:41 ha-857000 kubelet[1589]: E0917 09:07:41.836720    1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-857000&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 09:07:41 ha-857000 kubelet[1589]: E0917 09:07:41.836737    1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 17 09:07:42 ha-857000 kubelet[1589]: E0917 09:07:42.091554    1589 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-857000\" not found"
	Sep 17 09:07:45 ha-857000 kubelet[1589]: I0917 09:07:45.766034    1589 kubelet_node_status.go:72] "Attempting to register node" node="ha-857000"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-857000 -n ha-857000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-857000 -n ha-857000: exit status 2 (146.804212ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-857000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.70s)

TestMultiControlPlane/serial/StopCluster (160.95s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 stop -v=7 --alsologtostderr
E0917 02:08:59.151664    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:09:26.860113    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-857000 stop -v=7 --alsologtostderr: (2m40.776082359s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr: exit status 7 (101.619781ms)

-- stdout --
	ha-857000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-857000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-857000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-857000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 02:10:26.934397    4101 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:10:26.934691    4101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:26.934696    4101 out.go:358] Setting ErrFile to fd 2...
	I0917 02:10:26.934700    4101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:26.934880    4101 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:10:26.935073    4101 out.go:352] Setting JSON to false
	I0917 02:10:26.935095    4101 mustload.go:65] Loading cluster: ha-857000
	I0917 02:10:26.935146    4101 notify.go:220] Checking for updates...
	I0917 02:10:26.935424    4101 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:26.935437    4101 status.go:255] checking status of ha-857000 ...
	I0917 02:10:26.935864    4101 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:26.935903    4101 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:26.944665    4101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52076
	I0917 02:10:26.945037    4101 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:26.945448    4101 main.go:141] libmachine: Using API Version  1
	I0917 02:10:26.945477    4101 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:26.945753    4101 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:26.945884    4101 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:10:26.945989    4101 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:26.946055    4101 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3964
	I0917 02:10:26.946991    4101 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3964 missing from process table
	I0917 02:10:26.946995    4101 status.go:330] ha-857000 host status = "Stopped" (err=<nil>)
	I0917 02:10:26.947004    4101 status.go:343] host is not running, skipping remaining checks
	I0917 02:10:26.947009    4101 status.go:257] ha-857000 status: &{Name:ha-857000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:10:26.947034    4101 status.go:255] checking status of ha-857000-m02 ...
	I0917 02:10:26.947289    4101 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:26.947312    4101 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:26.955547    4101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52078
	I0917 02:10:26.955899    4101 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:26.956262    4101 main.go:141] libmachine: Using API Version  1
	I0917 02:10:26.956281    4101 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:26.956486    4101 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:26.956605    4101 main.go:141] libmachine: (ha-857000-m02) Calling .GetState
	I0917 02:10:26.956693    4101 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:26.956766    4101 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3976
	I0917 02:10:26.957656    4101 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3976 missing from process table
	I0917 02:10:26.957711    4101 status.go:330] ha-857000-m02 host status = "Stopped" (err=<nil>)
	I0917 02:10:26.957720    4101 status.go:343] host is not running, skipping remaining checks
	I0917 02:10:26.957725    4101 status.go:257] ha-857000-m02 status: &{Name:ha-857000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:10:26.957746    4101 status.go:255] checking status of ha-857000-m03 ...
	I0917 02:10:26.958019    4101 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:26.958046    4101 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:26.966675    4101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52080
	I0917 02:10:26.967091    4101 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:26.967425    4101 main.go:141] libmachine: Using API Version  1
	I0917 02:10:26.967434    4101 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:26.967632    4101 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:26.967763    4101 main.go:141] libmachine: (ha-857000-m03) Calling .GetState
	I0917 02:10:26.967856    4101 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:26.967933    4101 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid from json: 3442
	I0917 02:10:26.968836    4101 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid 3442 missing from process table
	I0917 02:10:26.968869    4101 status.go:330] ha-857000-m03 host status = "Stopped" (err=<nil>)
	I0917 02:10:26.968879    4101 status.go:343] host is not running, skipping remaining checks
	I0917 02:10:26.968885    4101 status.go:257] ha-857000-m03 status: &{Name:ha-857000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:10:26.968895    4101 status.go:255] checking status of ha-857000-m04 ...
	I0917 02:10:26.969154    4101 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:26.969176    4101 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:26.977591    4101 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52082
	I0917 02:10:26.977953    4101 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:26.978259    4101 main.go:141] libmachine: Using API Version  1
	I0917 02:10:26.978267    4101 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:26.978491    4101 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:26.978590    4101 main.go:141] libmachine: (ha-857000-m04) Calling .GetState
	I0917 02:10:26.978668    4101 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:26.978733    4101 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid from json: 3550
	I0917 02:10:26.979656    4101 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid 3550 missing from process table
	I0917 02:10:26.979702    4101 status.go:330] ha-857000-m04 host status = "Stopped" (err=<nil>)
	I0917 02:10:26.979711    4101 status.go:343] host is not running, skipping remaining checks
	I0917 02:10:26.979716    4101 status.go:257] ha-857000-m04 status: &{Name:ha-857000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr": ha-857000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-857000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-857000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-857000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr": ha-857000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-857000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-857000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-857000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr": ha-857000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-857000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-857000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-857000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-857000 -n ha-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-857000 -n ha-857000: exit status 7 (68.957918ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (160.95s)

TestMultiControlPlane/serial/RestartCluster (160.66s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-857000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0917 02:11:35.821017    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:12:58.896555    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-857000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (2m36.15074862s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr
ha_test.go:571: status says not two control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr": ha-857000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m04
type: Worker
host: Running
kubelet: Running

ha_test.go:574: status says not three hosts are running: args "out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr": ha-857000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m04
type: Worker
host: Running
kubelet: Running

ha_test.go:577: status says not three kubelets are running: args "out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr": ha-857000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m04
type: Worker
host: Running
kubelet: Running

ha_test.go:580: status says not two apiservers are running: args "out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr": ha-857000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m04
type: Worker
host: Running
kubelet: Running

ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	 True
	'

-- /stdout --
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-857000 -n ha-857000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-857000 logs -n 25: (3.273646022s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-857000 cp ha-857000-m03:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04:/home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m04 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp testdata/cp-test.txt                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3490570692/001/cp-test_ha-857000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000:/home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000 sudo cat                                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m02:/home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m02 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03:/home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m03 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-857000 node stop m02 -v=7                                                                                                 | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-857000 node start m02 -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:05 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000 -v=7                                                                                                       | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-857000 -v=7                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT | 17 Sep 24 02:06 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-857000 --wait=true -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:06 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	| node    | ha-857000 node delete m03 -v=7                                                                                               | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-857000 stop -v=7                                                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT | 17 Sep 24 02:10 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-857000 --wait=true                                                                                                     | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:10 PDT | 17 Sep 24 02:13 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 02:10:27
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 02:10:27.105477    4110 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:10:27.105665    4110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:27.105670    4110 out.go:358] Setting ErrFile to fd 2...
	I0917 02:10:27.105674    4110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:27.105845    4110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:10:27.107332    4110 out.go:352] Setting JSON to false
	I0917 02:10:27.130053    4110 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2397,"bootTime":1726561830,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:10:27.130205    4110 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:10:27.152188    4110 out.go:177] * [ha-857000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:10:27.194040    4110 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:10:27.194117    4110 notify.go:220] Checking for updates...
	I0917 02:10:27.238575    4110 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:27.259736    4110 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:10:27.280930    4110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:10:27.301762    4110 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:10:27.322633    4110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:10:27.344421    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:27.344920    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.344973    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:27.354413    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52088
	I0917 02:10:27.354771    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:27.355142    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:27.355153    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:27.355356    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:27.355460    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.355684    4110 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:10:27.355976    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.356005    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:27.364420    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52090
	I0917 02:10:27.364811    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:27.365167    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:27.365180    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:27.365391    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:27.365504    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.393706    4110 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 02:10:27.435894    4110 start.go:297] selected driver: hyperkit
	I0917 02:10:27.435922    4110 start.go:901] validating driver "hyperkit" against &{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:10:27.436195    4110 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:10:27.436329    4110 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:10:27.436542    4110 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:10:27.445831    4110 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:10:27.449537    4110 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.449556    4110 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:10:27.452252    4110 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:10:27.452291    4110 cni.go:84] Creating CNI manager for ""
	I0917 02:10:27.452327    4110 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:10:27.452403    4110 start.go:340] cluster config:
	{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:10:27.452523    4110 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:10:27.494874    4110 out.go:177] * Starting "ha-857000" primary control-plane node in "ha-857000" cluster
	I0917 02:10:27.515806    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:27.515897    4110 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:10:27.515918    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:10:27.516138    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:10:27.516158    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:10:27.516383    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:27.517269    4110 start.go:360] acquireMachinesLock for ha-857000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:10:27.517388    4110 start.go:364] duration metric: took 96.177µs to acquireMachinesLock for "ha-857000"
	I0917 02:10:27.517441    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:10:27.517460    4110 fix.go:54] fixHost starting: 
	I0917 02:10:27.517898    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.517930    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:27.526784    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52092
	I0917 02:10:27.527129    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:27.527462    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:27.527473    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:27.527739    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:27.527880    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.527995    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:10:27.528094    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:27.528210    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3964
	I0917 02:10:27.529100    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3964 missing from process table
	I0917 02:10:27.529122    4110 fix.go:112] recreateIfNeeded on ha-857000: state=Stopped err=<nil>
	I0917 02:10:27.529141    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	W0917 02:10:27.529225    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:10:27.570570    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000" ...
	I0917 02:10:27.591801    4110 main.go:141] libmachine: (ha-857000) Calling .Start
	I0917 02:10:27.592089    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:27.592131    4110 main.go:141] libmachine: (ha-857000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid
	I0917 02:10:27.592193    4110 main.go:141] libmachine: (ha-857000) DBG | Using UUID f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c
	I0917 02:10:27.699994    4110 main.go:141] libmachine: (ha-857000) DBG | Generated MAC c2:63:2b:63:80:76
	I0917 02:10:27.700019    4110 main.go:141] libmachine: (ha-857000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:10:27.700136    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b6e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:27.700165    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b6e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:27.700210    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:10:27.700256    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:10:27.700270    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:10:27.701709    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Pid is 4124
	I0917 02:10:27.702059    4110 main.go:141] libmachine: (ha-857000) DBG | Attempt 0
	I0917 02:10:27.702070    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:27.702132    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:10:27.703343    4110 main.go:141] libmachine: (ha-857000) DBG | Searching for c2:63:2b:63:80:76 in /var/db/dhcpd_leases ...
	I0917 02:10:27.703398    4110 main.go:141] libmachine: (ha-857000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:10:27.703416    4110 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66e94781}
	I0917 02:10:27.703422    4110 main.go:141] libmachine: (ha-857000) DBG | Found match: c2:63:2b:63:80:76
	I0917 02:10:27.703434    4110 main.go:141] libmachine: (ha-857000) DBG | IP: 192.169.0.5
	I0917 02:10:27.703500    4110 main.go:141] libmachine: (ha-857000) Calling .GetConfigRaw
	I0917 02:10:27.704135    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:27.704313    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:27.704745    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:10:27.704755    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.704862    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:27.704967    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:27.705062    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:27.705172    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:27.705289    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:27.705426    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:27.705645    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:27.705655    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:10:27.709824    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:10:27.761328    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:10:27.762023    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:27.762037    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:27.762058    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:27.762068    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:28.142704    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:10:28.142720    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:10:28.257454    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:28.257477    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:28.257500    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:28.257510    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:28.258332    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:10:28.258356    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:10:33.845455    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:10:33.845506    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:10:33.845516    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:10:33.869458    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:10:38.774269    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:10:38.774287    4110 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:10:38.774460    4110 buildroot.go:166] provisioning hostname "ha-857000"
	I0917 02:10:38.774470    4110 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:10:38.774556    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.774689    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:38.774787    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.774865    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.774959    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:38.775097    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:38.775254    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:38.775262    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000 && echo "ha-857000" | sudo tee /etc/hostname
	I0917 02:10:38.842954    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000
	
	I0917 02:10:38.842972    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.843114    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:38.843224    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.843309    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.843398    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:38.843557    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:38.843701    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:38.843712    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:10:38.908790    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:10:38.908811    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:10:38.908824    4110 buildroot.go:174] setting up certificates
	I0917 02:10:38.908830    4110 provision.go:84] configureAuth start
	I0917 02:10:38.908845    4110 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:10:38.908979    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:38.909073    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.909177    4110 provision.go:143] copyHostCerts
	I0917 02:10:38.909208    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:10:38.909278    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:10:38.909287    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:10:38.909606    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:10:38.909812    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:10:38.909853    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:10:38.909857    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:10:38.909935    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:10:38.910085    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:10:38.910127    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:10:38.910132    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:10:38.910214    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:10:38.910362    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000 san=[127.0.0.1 192.169.0.5 ha-857000 localhost minikube]
	I0917 02:10:38.962566    4110 provision.go:177] copyRemoteCerts
	I0917 02:10:38.962618    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:10:38.962632    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.962737    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:38.962836    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.962932    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:38.963020    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:38.998776    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:10:38.998851    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:10:39.018683    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:10:39.018741    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 02:10:39.038754    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:10:39.038814    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:10:39.058064    4110 provision.go:87] duration metric: took 149.217348ms to configureAuth
	I0917 02:10:39.058076    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:10:39.058257    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:39.058270    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:39.058416    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:39.058513    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:39.058598    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.058689    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.058780    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:39.058915    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:39.059035    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:39.059042    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:10:39.117847    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:10:39.117859    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:10:39.117937    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:10:39.117952    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:39.118078    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:39.118171    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.118258    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.118338    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:39.118469    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:39.118616    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:39.118663    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:10:39.186097    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:10:39.186120    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:39.186247    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:39.186347    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.186426    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.186527    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:39.186659    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:39.186806    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:39.186817    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:10:40.814202    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:10:40.814217    4110 machine.go:96] duration metric: took 13.109237782s to provisionDockerMachine
	I0917 02:10:40.814229    4110 start.go:293] postStartSetup for "ha-857000" (driver="hyperkit")
	I0917 02:10:40.814236    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:10:40.814246    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:40.814438    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:10:40.814456    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:40.814571    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:40.814667    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.814762    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:40.814848    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:40.854204    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:10:40.857656    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:10:40.857668    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:10:40.857773    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:10:40.857955    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:10:40.857962    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:10:40.858166    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:10:40.867201    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:10:40.895727    4110 start.go:296] duration metric: took 81.487995ms for postStartSetup
	I0917 02:10:40.895754    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:40.895937    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:10:40.895964    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:40.896062    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:40.896140    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.896211    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:40.896292    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:40.931812    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:10:40.931872    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:10:40.965671    4110 fix.go:56] duration metric: took 13.447980679s for fixHost
	I0917 02:10:40.965693    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:40.965831    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:40.965924    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.966013    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.966122    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:40.966261    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:40.966403    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:40.966410    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:10:41.023835    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564240.935930388
	
	I0917 02:10:41.023847    4110 fix.go:216] guest clock: 1726564240.935930388
	I0917 02:10:41.023853    4110 fix.go:229] Guest: 2024-09-17 02:10:40.935930388 -0700 PDT Remote: 2024-09-17 02:10:40.965683 -0700 PDT m=+13.896006994 (delta=-29.752612ms)
	I0917 02:10:41.023870    4110 fix.go:200] guest clock delta is within tolerance: -29.752612ms
	I0917 02:10:41.023873    4110 start.go:83] releasing machines lock for "ha-857000", held for 13.506240986s
	I0917 02:10:41.023893    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024017    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:41.024124    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024416    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024496    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024577    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:10:41.024607    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:41.024622    4110 ssh_runner.go:195] Run: cat /version.json
	I0917 02:10:41.024633    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:41.024692    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:41.024731    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:41.024799    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:41.024812    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:41.024882    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:41.024908    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:41.025002    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:41.025031    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:41.057444    4110 ssh_runner.go:195] Run: systemctl --version
	I0917 02:10:41.119261    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 02:10:41.123760    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:10:41.123809    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:10:41.136297    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:10:41.136307    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:10:41.136412    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:10:41.153182    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:10:41.162387    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:10:41.171363    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:10:41.171411    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:10:41.180339    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:10:41.189205    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:10:41.198331    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:10:41.207214    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:10:41.216288    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:10:41.225185    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:10:41.234170    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:10:41.243192    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:10:41.251363    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:10:41.259648    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:41.359254    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:10:41.378053    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:10:41.378144    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:10:41.391608    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:10:41.406431    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:10:41.426598    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:10:41.437654    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:10:41.448507    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:10:41.470118    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:10:41.481632    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:10:41.496609    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:10:41.499690    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:10:41.507723    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:10:41.520894    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:10:41.633690    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:10:41.735063    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:10:41.735129    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:10:41.749181    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:41.842846    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:10:44.137188    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.294283491s)
	I0917 02:10:44.137256    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:10:44.147554    4110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:10:44.160480    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:10:44.170998    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:10:44.262329    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:10:44.355414    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:44.456404    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:10:44.470268    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:10:44.481488    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:44.585298    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:10:44.651024    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:10:44.651127    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:10:44.655468    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:10:44.655523    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:10:44.660816    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:10:44.685805    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:10:44.685900    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:10:44.701620    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
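
Running docker version --format {{.Server.Version}} is how the runtime version (27.2.1 above) is read back from the daemon. A small stand-alone sketch of the same probe, using only the documented docker CLI flag:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the log runs twice above: ask the daemon for its server
	// version via a Go template, then trim the trailing newline.
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		fmt.Println("docker daemon not reachable:", err)
		return
	}
	fmt.Println("server version:", strings.TrimSpace(string(out))) // e.g. "27.2.1"
}
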
	I0917 02:10:44.762577    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:10:44.762643    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:44.763055    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:10:44.767764    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
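
The one-liner above makes the /etc/hosts update idempotent: filter out any old host.minikube.internal line, append the fresh mapping, write a temp file, then copy it into place so readers never see a half-written file. The same logic as a Go sketch (the temp path here is hypothetical; the log uses /tmp/h.$$):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.169.0.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Keep every line except a stale host.minikube.internal mapping.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	tmp := "/tmp/hosts.new" // hypothetical temp path
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("now run: sudo cp", tmp, "/etc/hosts")
}
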
	I0917 02:10:44.778676    4110 kubeadm.go:883] updating cluster {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 02:10:44.778770    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:44.778845    4110 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:10:44.792490    4110 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:10:44.792502    4110 docker.go:615] Images already preloaded, skipping extraction
	I0917 02:10:44.792587    4110 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:10:44.806122    4110 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:10:44.806141    4110 cache_images.go:84] Images are preloaded, skipping loading
	I0917 02:10:44.806152    4110 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 02:10:44.806226    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:10:44.806308    4110 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:10:44.838425    4110 cni.go:84] Creating CNI manager for ""
	I0917 02:10:44.838438    4110 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:10:44.838451    4110 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:10:44.838467    4110 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-857000 NodeName:ha-857000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:10:44.838548    4110 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-857000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
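
The document above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into one kubeadm.yaml. A sketch that decodes just the kubelet fragment to confirm the cgroup-driver wiring; it assumes gopkg.in/yaml.v3 as the decoder, which is not part of this log:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3" // assumed dependency; any YAML decoder works
)

// Only the KubeletConfiguration fields this sketch checks.
type kubeletConfig struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
}

func main() {
	doc := `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
`
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
		panic(err)
	}
	// cgroupDriver must match the "cgroupfs" driver configured for Docker
	// and containerd earlier in this log, or the kubelet will not start.
	fmt.Printf("%s: driver=%s endpoint=%s\n", kc.Kind, kc.CgroupDriver, kc.ContainerRuntimeEndpoint)
}
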
	
	I0917 02:10:44.838565    4110 kube-vip.go:115] generating kube-vip config ...
	I0917 02:10:44.838624    4110 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:10:44.852006    4110 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:10:44.852072    4110 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
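
With this manifest, kube-vip elects one control-plane node to hold the 192.169.0.254 virtual IP and answer ARP for it, and load-balances port 8443 across the API servers. A hypothetical reachability probe for that VIP (address and port taken from the manifest above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Once a leader holds the VIP, this TCP dial succeeds from the host network.
	conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 2*time.Second)
	if err != nil {
		fmt.Println("VIP not answering:", err)
		return
	}
	conn.Close()
	fmt.Println("control-plane VIP reachable")
}
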
	I0917 02:10:44.852126    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:10:44.861875    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:10:44.861926    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 02:10:44.870065    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 02:10:44.883323    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:10:44.896671    4110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 02:10:44.910190    4110 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:10:44.923776    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:10:44.926683    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:10:44.936751    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:45.031050    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:10:45.045803    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.5
	I0917 02:10:45.045815    4110 certs.go:194] generating shared ca certs ...
	I0917 02:10:45.045826    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.046013    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:10:45.046090    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:10:45.046101    4110 certs.go:256] generating profile certs ...
	I0917 02:10:45.046208    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:10:45.046290    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea
	I0917 02:10:45.046357    4110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:10:45.046364    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:10:45.046385    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:10:45.046406    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:10:45.046424    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:10:45.046442    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:10:45.046474    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:10:45.046503    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:10:45.046520    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:10:45.046624    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:10:45.046679    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:10:45.046688    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:10:45.046749    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:10:45.046790    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:10:45.046829    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:10:45.046908    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:10:45.046945    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.046966    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.046984    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.047483    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:10:45.080356    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:10:45.112920    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:10:45.138450    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:10:45.175252    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:10:45.218044    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:10:45.251977    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:10:45.309085    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:10:45.353596    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:10:45.384476    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:10:45.404778    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:10:45.423525    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 02:10:45.437207    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:10:45.441704    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:10:45.450346    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.453899    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.453945    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.458361    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:10:45.466854    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:10:45.475379    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.478924    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.478963    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.483279    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:10:45.491638    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:10:45.500375    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.504070    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.504128    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.508583    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
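
Each openssl x509 -hash -noout run above computes the OpenSSL subject hash (b5213941 for minikubeCA) that names the /etc/ssl/certs/<hash>.0 symlink created on the following line. A sketch that derives the symlink name the same way, shelling out to the same openssl flags:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Subject hash names the OpenSSL-style trust symlink <hash>.0.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // b5213941 for minikubeCA above
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}
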
	I0917 02:10:45.516977    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:10:45.520582    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:10:45.524889    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:10:45.529282    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:10:45.533668    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:10:45.538022    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:10:45.542262    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
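
Each -checkend 86400 call above asks whether a certificate survives the next 24 hours. The same check expressed with Go's crypto/x509, shown as an illustration against the first path from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Go equivalent of `openssl x509 -checkend 86400`.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h:", cert.NotAfter)
	}
}
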
	I0917 02:10:45.546447    4110 kubeadm.go:392] StartCluster: {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:10:45.546579    4110 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:10:45.558935    4110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:10:45.566714    4110 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 02:10:45.566724    4110 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:10:45.566760    4110 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:10:45.574257    4110 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:10:45.574553    4110 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-857000" does not appear in /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:45.574638    4110 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1025/kubeconfig needs updating (will repair): [kubeconfig missing "ha-857000" cluster setting kubeconfig missing "ha-857000" context setting]
	I0917 02:10:45.574818    4110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.575437    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:45.575640    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:10:45.575954    4110 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 02:10:45.576155    4110 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:10:45.583535    4110 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 02:10:45.583548    4110 kubeadm.go:597] duration metric: took 16.820219ms to restartPrimaryControlPlane
	I0917 02:10:45.583553    4110 kubeadm.go:394] duration metric: took 37.114772ms to StartCluster
	I0917 02:10:45.583562    4110 settings.go:142] acquiring lock: {Name:mk5de70c670a23cf49fdbd19ddb5c1d2a9de9a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.583637    4110 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:45.584029    4110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.584244    4110 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:10:45.584257    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:10:45.584290    4110 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:10:45.584399    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:45.629290    4110 out.go:177] * Enabled addons: 
	I0917 02:10:45.650483    4110 addons.go:510] duration metric: took 66.114939ms for enable addons: enabled=[]
	I0917 02:10:45.650526    4110 start.go:246] waiting for cluster config update ...
	I0917 02:10:45.650541    4110 start.go:255] writing updated cluster config ...
	I0917 02:10:45.672110    4110 out.go:201] 
	I0917 02:10:45.693671    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:45.693812    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:45.716376    4110 out.go:177] * Starting "ha-857000-m02" control-plane node in "ha-857000" cluster
	I0917 02:10:45.758138    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:45.758205    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:10:45.758422    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:10:45.758440    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:10:45.758566    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:45.759523    4110 start.go:360] acquireMachinesLock for ha-857000-m02: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:10:45.759643    4110 start.go:364] duration metric: took 94.526µs to acquireMachinesLock for "ha-857000-m02"
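
The "duration metric: took ..." lines in this log all follow the same pattern: record a start time, do the work, log the elapsed time. A generic sketch (minikube's internal helper may differ):

package main

import (
	"log"
	"time"
)

func main() {
	start := time.Now()
	time.Sleep(95 * time.Microsecond) // stand-in for acquiring the machines lock
	log.Printf("duration metric: took %s to acquireMachinesLock", time.Since(start))
}
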
	I0917 02:10:45.759684    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:10:45.759694    4110 fix.go:54] fixHost starting: m02
	I0917 02:10:45.760135    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:45.760170    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:45.769422    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52114
	I0917 02:10:45.769778    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:45.770120    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:45.770130    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:45.770332    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:45.770446    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:10:45.770540    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetState
	I0917 02:10:45.770620    4110 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:45.770696    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3976
	I0917 02:10:45.771617    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3976 missing from process table
	I0917 02:10:45.771641    4110 fix.go:112] recreateIfNeeded on ha-857000-m02: state=Stopped err=<nil>
	I0917 02:10:45.771648    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	W0917 02:10:45.771734    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:10:45.793214    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m02" ...
	I0917 02:10:45.835194    4110 main.go:141] libmachine: (ha-857000-m02) Calling .Start
	I0917 02:10:45.835422    4110 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:45.835478    4110 main.go:141] libmachine: (ha-857000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid
	I0917 02:10:45.836481    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3976 missing from process table
	I0917 02:10:45.836493    4110 main.go:141] libmachine: (ha-857000-m02) DBG | pid 3976 is in state "Stopped"
	I0917 02:10:45.836506    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid...
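
The driver treats hyperkit.pid as stale because pid 3976 is no longer in the process table. A sketch of that check, with a shortened, hypothetical pid-file path; sending signal 0 only probes whether the process exists:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

func main() {
	const pidFile = "hyperkit.pid" // the log uses the full path under .minikube/machines/
	data, err := os.ReadFile(pidFile)
	if err != nil {
		fmt.Println("no pid file:", err)
		return
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		panic(err)
	}
	// Signal 0 delivers nothing; an error (ESRCH) means the pid is gone.
	if err := syscall.Kill(pid, 0); err != nil {
		fmt.Printf("pid %d missing from process table; removing stale pid file\n", pid)
		os.Remove(pidFile)
	}
}
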
	I0917 02:10:45.836730    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Using UUID 1940a045-0b1b-4b28-842d-d4858a62cbd3
	I0917 02:10:45.862461    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Generated MAC 9a:95:4e:4b:65:fe
	I0917 02:10:45.862487    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:10:45.862599    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:45.862645    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:45.862683    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1940a045-0b1b-4b28-842d-d4858a62cbd3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:10:45.862720    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1940a045-0b1b-4b28-842d-d4858a62cbd3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:10:45.862741    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:10:45.864138    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Pid is 4131
	I0917 02:10:45.864563    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Attempt 0
	I0917 02:10:45.864573    4110 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:45.864635    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 4131
	I0917 02:10:45.866426    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Searching for 9a:95:4e:4b:65:fe in /var/db/dhcpd_leases ...
	I0917 02:10:45.866511    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:10:45.866527    4110 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:10:45.866546    4110 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea9817}
	I0917 02:10:45.866556    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Found match: 9a:95:4e:4b:65:fe
	I0917 02:10:45.866585    4110 main.go:141] libmachine: (ha-857000-m02) DBG | IP: 192.169.0.6
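
The MAC-to-IP lookup above works by scanning the host's /var/db/dhcpd_leases for the VM's generated MAC. A sketch of the same scan; it assumes the usual macOS lease layout (an ip_address= line preceding hw_address=1,<mac> inside each {...} entry), which is a simplification of minikube's parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC returns the IP leased to the given MAC, per the assumed layout above.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
			return ip, nil
		}
	}
	return "", fmt.Errorf("MAC %s not found", mac)
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "9a:95:4e:4b:65:fe")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("leased IP:", ip) // 192.169.0.6 in the run above
}
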
	I0917 02:10:45.866617    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetConfigRaw
	I0917 02:10:45.867379    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:10:45.867624    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:45.868172    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:10:45.868192    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:10:45.868319    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:10:45.868433    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:10:45.868540    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:10:45.868629    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:10:45.868743    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:10:45.868892    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:45.869038    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:10:45.869047    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:10:45.871979    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:10:45.880237    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:10:45.881261    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:45.881280    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:45.881317    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:45.881331    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:46.263104    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:10:46.263119    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:10:46.377844    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:46.377864    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:46.377874    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:46.377890    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:46.378727    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:10:46.378736    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:10:51.977750    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:51 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:10:51.977833    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:51 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:10:51.977841    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:51 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:10:52.002295    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:52 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:11:20.931384    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:11:20.931398    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:11:20.931549    4110 buildroot.go:166] provisioning hostname "ha-857000-m02"
	I0917 02:11:20.931560    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:11:20.931664    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:20.931762    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:20.931855    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.931937    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.932033    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:20.932169    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:20.932351    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:20.932359    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m02 && echo "ha-857000-m02" | sudo tee /etc/hostname
	I0917 02:11:20.993183    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m02
	
	I0917 02:11:20.993198    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:20.993326    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:20.993440    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.993526    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.993618    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:20.993763    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:20.993914    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:20.993925    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:11:21.050925    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:11:21.050951    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:11:21.050960    4110 buildroot.go:174] setting up certificates
	I0917 02:11:21.050966    4110 provision.go:84] configureAuth start
	I0917 02:11:21.050972    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:11:21.051109    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:11:21.051192    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.051304    4110 provision.go:143] copyHostCerts
	I0917 02:11:21.051330    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:21.051388    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:11:21.051394    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:21.051551    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:11:21.051732    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:21.051778    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:11:21.051784    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:21.051862    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:11:21.051999    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:21.052037    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:11:21.052041    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:21.052127    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:11:21.052261    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m02 san=[127.0.0.1 192.169.0.6 ha-857000-m02 localhost minikube]
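
The server cert minted here must carry SANs for every name and address a client might dial: 127.0.0.1, 192.169.0.6, ha-857000-m02, localhost, and minikube. A self-signed Go sketch with the same SANs; it is an illustration only, since the real step signs with the CA key pair (ca.pem / ca-key.pem) listed above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-857000-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
	}
	// Self-signed for brevity: template doubles as issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
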
	I0917 02:11:21.131473    4110 provision.go:177] copyRemoteCerts
	I0917 02:11:21.131534    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:11:21.131551    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.131683    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.131772    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.131866    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.131988    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:21.165457    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:11:21.165530    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:11:21.185353    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:11:21.185424    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:11:21.204885    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:11:21.204944    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:11:21.224555    4110 provision.go:87] duration metric: took 173.578725ms to configureAuth
	I0917 02:11:21.224572    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:11:21.224752    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:21.224765    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:21.224898    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.224985    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.225071    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.225151    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.225226    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.225334    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:21.225453    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:21.225471    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:11:21.276594    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:11:21.276610    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:11:21.276682    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:11:21.276692    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.276824    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.276911    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.276982    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.277068    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.277206    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:21.277343    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:21.277390    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target minikube-automount.service docker.socket
	Requires=minikube-automount.service docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:11:21.338440    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target minikube-automount.service docker.socket
	Requires=minikube-automount.service docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:11:21.338457    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.338602    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.338693    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.338786    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.338878    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.339018    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:21.339165    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:21.339180    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:11:23.000541    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:11:23.000557    4110 machine.go:96] duration metric: took 37.131734761s to provisionDockerMachine
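	The diff-or-install one-liner above is what makes unit provisioning idempotent: diff exits non-zero both when the two files differ and when the target does not exist yet (the "can't stat" case seen here), so the new unit is only moved into place, and Docker only restarted, when something actually changed. The same pattern in isolation, with placeholder paths:
	
	  sudo diff -u /etc/systemd/system/myapp.service /tmp/myapp.service.new \
	    || { sudo mv /tmp/myapp.service.new /etc/systemd/system/myapp.service; \
	         sudo systemctl daemon-reload && sudo systemctl enable --now myapp; }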
	I0917 02:11:23.000565    4110 start.go:293] postStartSetup for "ha-857000-m02" (driver="hyperkit")
	I0917 02:11:23.000572    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:11:23.000581    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.000771    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:11:23.000784    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.000877    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.000970    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.001060    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.001151    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:23.034070    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:11:23.037044    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:11:23.037054    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:11:23.037149    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:11:23.037326    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:11:23.037333    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:11:23.037542    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:11:23.045540    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:11:23.064134    4110 start.go:296] duration metric: took 63.560241ms for postStartSetup
	I0917 02:11:23.064153    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.064355    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:11:23.064367    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.064443    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.064537    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.064625    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.064699    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:23.096648    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:11:23.096719    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:11:23.150750    4110 fix.go:56] duration metric: took 37.39040777s for fixHost
	I0917 02:11:23.150781    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.150933    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.151043    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.151139    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.151225    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.151344    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:23.151480    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:23.151487    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:11:23.205108    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564282.931256187
	
	I0917 02:11:23.205121    4110 fix.go:216] guest clock: 1726564282.931256187
	I0917 02:11:23.205126    4110 fix.go:229] Guest: 2024-09-17 02:11:22.931256187 -0700 PDT Remote: 2024-09-17 02:11:23.150765 -0700 PDT m=+56.080359699 (delta=-219.508813ms)
	I0917 02:11:23.205134    4110 fix.go:200] guest clock delta is within tolerance: -219.508813ms
	I0917 02:11:23.205138    4110 start.go:83] releasing machines lock for "ha-857000-m02", held for 37.444836088s
	I0917 02:11:23.205153    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.205283    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:11:23.226836    4110 out.go:177] * Found network options:
	I0917 02:11:23.247780    4110 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 02:11:23.268466    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:23.268508    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.269341    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.269597    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.269778    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 02:11:23.269794    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:23.269828    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.269896    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:11:23.269915    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.270129    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.270153    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.270351    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.270407    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.270526    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.270571    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.270741    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:23.270760    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	W0917 02:11:23.355936    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:11:23.356046    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:11:23.371785    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:11:23.371805    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:23.371897    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:23.389343    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:11:23.397507    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:11:23.405706    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:11:23.405760    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:11:23.413954    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:23.422064    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:11:23.430077    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:23.438247    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:11:23.446615    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:11:23.455025    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:11:23.463904    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:11:23.472877    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:11:23.480886    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:11:23.488979    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:23.586431    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:11:23.605512    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:23.605590    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:11:23.619031    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:23.632481    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:11:23.650301    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:23.661034    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:23.671499    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:11:23.693809    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:23.704324    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:23.719425    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:11:23.722279    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:11:23.729409    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:11:23.743121    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:11:23.848749    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:11:23.947630    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:11:23.947661    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:11:23.965207    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:24.060164    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:11:26.333778    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.273556023s)
	I0917 02:11:26.333847    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:11:26.345198    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:26.355965    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:11:26.461793    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:11:26.556361    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:26.674366    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:11:26.687753    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:26.697698    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:26.797118    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:11:26.861306    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:11:26.861392    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:11:26.865857    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:11:26.865915    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:11:26.869732    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:11:26.894886    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:11:26.894999    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:26.911893    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:26.950833    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:11:26.972458    4110 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 02:11:26.993284    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:11:26.993711    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:11:26.998329    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
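	Note the shape of the hosts update above: the output redirection runs as the unprivileged SSH user, so the filtered file is assembled in /tmp and only the final sudo cp needs root (a bare sudo command with a shell redirect into /etc/hosts would fail, because the calling shell performs the redirect before sudo runs). The same idempotent rewrite with placeholder values:
	
	  { grep -v $'\tmy.internal.name$' /etc/hosts; printf '10.0.0.1\tmy.internal.name\n'; } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts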
	I0917 02:11:27.008512    4110 mustload.go:65] Loading cluster: ha-857000
	I0917 02:11:27.008684    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:27.008920    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:11:27.008943    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:11:27.017607    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52136
	I0917 02:11:27.017941    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:11:27.018292    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:11:27.018310    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:11:27.018503    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:11:27.018620    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:11:27.018699    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:27.018771    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:11:27.019715    4110 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:11:27.019989    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:11:27.020015    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:11:27.028562    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52138
	I0917 02:11:27.028902    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:11:27.029241    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:11:27.029257    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:11:27.029461    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:11:27.029566    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:11:27.029665    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.6
	I0917 02:11:27.029672    4110 certs.go:194] generating shared ca certs ...
	I0917 02:11:27.029680    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:11:27.029857    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:11:27.029930    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:11:27.029938    4110 certs.go:256] generating profile certs ...
	I0917 02:11:27.030058    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:11:27.030140    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.d3e75930
	I0917 02:11:27.030214    4110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:11:27.030221    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:11:27.030242    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:11:27.030266    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:11:27.030285    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:11:27.030303    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:11:27.030337    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:11:27.030366    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:11:27.030389    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:11:27.030486    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:11:27.030540    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:11:27.030549    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:11:27.030587    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:11:27.030621    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:11:27.030651    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:11:27.030716    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:11:27.030753    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.030774    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.030792    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.030816    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:11:27.030911    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:11:27.031000    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:11:27.031078    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:11:27.031162    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:11:27.058778    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 02:11:27.062313    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 02:11:27.070939    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 02:11:27.074280    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 02:11:27.083003    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 02:11:27.086057    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 02:11:27.094554    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 02:11:27.097659    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 02:11:27.106657    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 02:11:27.109894    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 02:11:27.118370    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 02:11:27.121478    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 02:11:27.130386    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:11:27.150256    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:11:27.169526    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:11:27.188769    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:11:27.207966    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:11:27.227067    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:11:27.246289    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:11:27.265271    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:11:27.284669    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:11:27.303761    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:11:27.323113    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:11:27.342331    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 02:11:27.355765    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 02:11:27.369277    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 02:11:27.382837    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 02:11:27.396474    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 02:11:27.410313    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 02:11:27.423731    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 02:11:27.437366    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:11:27.441447    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:11:27.450619    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.453941    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.453997    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.458171    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:11:27.467199    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:11:27.476144    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.479431    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.479473    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.483603    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:11:27.492580    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:11:27.501517    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.504871    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.504915    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.509027    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:11:27.517892    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:11:27.521155    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:11:27.525378    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:11:27.529633    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:11:27.533810    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:11:27.538003    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:11:27.542137    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
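	Two openssl idioms carry this block. First, -hash prints the subject-name hash that OpenSSL's certificate-directory lookup expects, which is why each CA ends up symlinked as <hash>.0 (b5213941.0 for minikubeCA.pem above). Second, -checkend 86400 exits non-zero if the certificate expires within the next 24 hours, letting the provisioner detect stale control-plane certs without parsing dates. Both, sketched against an arbitrary PEM file:
	
	  openssl x509 -hash -noout -in cert.pem            # prints the hash used for /etc/ssl/certs/<hash>.0
	  openssl x509 -noout -in cert.pem -checkend 86400  # exit 0 only if still valid 24h from now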
	I0917 02:11:27.546288    4110 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I0917 02:11:27.546336    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:11:27.546350    4110 kube-vip.go:115] generating kube-vip config ...
	I0917 02:11:27.546384    4110 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:11:27.558948    4110 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:11:27.558990    4110 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
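	The manifest above runs kube-vip as a hostNetwork static pod on each control-plane node: NET_ADMIN/NET_RAW let it answer ARP for the virtual IP 192.169.0.254 on eth0, and vip_leaderelection plus vip_leasename mean only the current holder of the plndr-cp-lock Lease advertises the VIP (lb_enable additionally load-balances port 8443 across control planes). Once the cluster is up, the election can be inspected against any working kubeconfig:
	
	  kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'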
	I0917 02:11:27.559048    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:11:27.568292    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:11:27.568351    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 02:11:27.577686    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 02:11:27.591394    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:11:27.604835    4110 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:11:27.618390    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:11:27.621271    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:11:27.630851    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:27.729065    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:11:27.743762    4110 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:11:27.743972    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:27.765105    4110 out.go:177] * Verifying Kubernetes components...
	I0917 02:11:27.805899    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:27.933521    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:11:27.948089    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:11:27.948282    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 02:11:27.948321    4110 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 02:11:27.948495    4110 node_ready.go:35] waiting up to 6m0s for node "ha-857000-m02" to be "Ready" ...
	I0917 02:11:27.948579    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:27.948584    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:27.948591    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:27.948595    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:28.948736    4110 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0917 02:11:28.948861    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:28.948870    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:28.948878    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:28.948882    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.256443    4110 round_trippers.go:574] Response Status: 200 OK in 7307 milliseconds
	I0917 02:11:36.257038    4110 node_ready.go:49] node "ha-857000-m02" has status "Ready":"True"
	I0917 02:11:36.257051    4110 node_ready.go:38] duration metric: took 8.308394835s for node "ha-857000-m02" to be "Ready" ...
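	The readiness wait is a raw polling loop over GET /api/v1/nodes/<name>: the first request logs no response status (it failed and was retried after the 1s backoff), and the retry returns 200 once the apiserver is reachable, for a total of about 8.3s. The equivalent check from a shell, assuming a kubeconfig pointed at this cluster:
	
	  kubectl get node ha-857000-m02 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'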
	I0917 02:11:36.257061    4110 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:11:36.257098    4110 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 02:11:36.257107    4110 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 02:11:36.257147    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:36.257152    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.257158    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.257164    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.271996    4110 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0917 02:11:36.280676    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.280736    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:11:36.280742    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.280752    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.280756    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.307985    4110 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0917 02:11:36.308476    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.308484    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.308491    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.308501    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.312984    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:36.313392    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.313402    4110 pod_ready.go:82] duration metric: took 32.709315ms for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.313409    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.313452    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nl5j5
	I0917 02:11:36.313457    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.313463    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.313468    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.319771    4110 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 02:11:36.320384    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.320393    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.320400    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.320403    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.322816    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:36.323378    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.323388    4110 pod_ready.go:82] duration metric: took 9.97387ms for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.323395    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.323435    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000
	I0917 02:11:36.323440    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.323446    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.323450    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.327486    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:36.328047    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.328054    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.328060    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.328063    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.331571    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:36.332110    4110 pod_ready.go:93] pod "etcd-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.332121    4110 pod_ready.go:82] duration metric: took 8.720083ms for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.332128    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.332168    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m02
	I0917 02:11:36.332173    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.332179    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.332184    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.336324    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:36.336846    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:36.336854    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.336860    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.336864    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.340608    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:36.341048    4110 pod_ready.go:93] pod "etcd-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.341057    4110 pod_ready.go:82] duration metric: took 8.92351ms for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.341064    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.341104    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:11:36.341110    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.341116    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.341121    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.343462    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:36.458248    4110 request.go:632] Waited for 114.333049ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:36.458307    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:36.458312    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.458318    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.458326    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.466021    4110 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 02:11:36.466526    4110 pod_ready.go:93] pod "etcd-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.466536    4110 pod_ready.go:82] duration metric: took 125.46489ms for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.466548    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.657514    4110 request.go:632] Waited for 190.921312ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:11:36.657567    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:11:36.657574    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.657579    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.657584    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.659804    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:36.857671    4110 request.go:632] Waited for 197.395211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.857701    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.857705    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.857711    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.857715    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.861065    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:36.861653    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.861669    4110 pod_ready.go:82] duration metric: took 395.104039ms for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.861677    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.057332    4110 request.go:632] Waited for 195.603008ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:11:37.057382    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:11:37.057387    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.057393    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.057398    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.060216    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:37.258671    4110 request.go:632] Waited for 197.954534ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:37.258706    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:37.258713    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.258721    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.258727    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.267718    4110 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 02:11:37.268069    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:37.268082    4110 pod_ready.go:82] duration metric: took 406.392892ms for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.268090    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.457925    4110 request.go:632] Waited for 189.791882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:11:37.457975    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:11:37.457980    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.457987    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.457992    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.461663    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:37.658806    4110 request.go:632] Waited for 196.487027ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:37.658861    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:37.658867    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.658874    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.658878    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.661429    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:37.661888    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:37.661897    4110 pod_ready.go:82] duration metric: took 393.794602ms for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.661905    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.857414    4110 request.go:632] Waited for 195.469923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:11:37.857468    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:11:37.857474    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.857481    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.857486    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.860019    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.057880    4110 request.go:632] Waited for 197.333642ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:38.057910    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:38.057915    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.057922    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.057927    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.060540    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.061091    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:38.061101    4110 pod_ready.go:82] duration metric: took 399.184022ms for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.061109    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.257757    4110 request.go:632] Waited for 196.608954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:11:38.257857    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:11:38.257871    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.257877    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.257882    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.259904    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.458082    4110 request.go:632] Waited for 197.709678ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:38.458138    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:38.458147    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.458154    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.458158    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.460347    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.460715    4110 pod_ready.go:98] node "ha-857000-m02" hosting pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:38.460726    4110 pod_ready.go:82] duration metric: took 399.604676ms for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	E0917 02:11:38.460732    4110 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-857000-m02" hosting pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:38.460739    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.658188    4110 request.go:632] Waited for 197.403717ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:11:38.658255    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:11:38.658261    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.658267    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.658271    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.660934    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.857786    4110 request.go:632] Waited for 196.168284ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:38.857841    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:38.857851    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.857863    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.857873    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.861470    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:38.861751    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:38.861759    4110 pod_ready.go:82] duration metric: took 401.003253ms for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.861766    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.057800    4110 request.go:632] Waited for 195.986319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:11:39.057882    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:11:39.057893    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.057904    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.057912    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.061639    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:39.257697    4110 request.go:632] Waited for 195.312452ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:11:39.257726    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:11:39.257731    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.257737    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.257741    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.260209    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:39.260462    4110 pod_ready.go:93] pod "kube-proxy-528ht" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:39.260471    4110 pod_ready.go:82] duration metric: took 398.692905ms for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.260478    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.459321    4110 request.go:632] Waited for 198.788481ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:11:39.459387    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:11:39.459394    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.459411    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.459422    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.461885    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:39.657441    4110 request.go:632] Waited for 195.121107ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:39.657541    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:39.657551    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.657579    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.657585    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.661441    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:39.661929    4110 pod_ready.go:93] pod "kube-proxy-g9wxm" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:39.661942    4110 pod_ready.go:82] duration metric: took 401.451734ms for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.661951    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.857721    4110 request.go:632] Waited for 195.727193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:11:39.857785    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:11:39.857791    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.857797    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.857802    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.859663    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:11:40.058574    4110 request.go:632] Waited for 198.443343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.058668    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.058679    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.058690    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.058699    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.062499    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:40.063124    4110 pod_ready.go:93] pod "kube-proxy-vskbj" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:40.063133    4110 pod_ready.go:82] duration metric: took 401.170349ms for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.063140    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.257873    4110 request.go:632] Waited for 194.653928ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:11:40.257927    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:11:40.257937    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.257948    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.257956    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.262255    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:40.458287    4110 request.go:632] Waited for 195.380222ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:40.458411    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:40.458421    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.458432    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.458443    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.462171    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:40.462629    4110 pod_ready.go:98] node "ha-857000-m02" hosting pod "kube-proxy-zrqvr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:40.462643    4110 pod_ready.go:82] duration metric: took 399.490798ms for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	E0917 02:11:40.462673    4110 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-857000-m02" hosting pod "kube-proxy-zrqvr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:40.462687    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.658101    4110 request.go:632] Waited for 195.359912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:11:40.658147    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:11:40.658152    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.658159    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.658164    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.660407    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:40.858455    4110 request.go:632] Waited for 197.559018ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.858564    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.858583    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.858595    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.858601    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.861876    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:40.862327    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:40.862336    4110 pod_ready.go:82] duration metric: took 399.635382ms for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.862343    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:41.057949    4110 request.go:632] Waited for 195.512959ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:11:41.058021    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:11:41.058032    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.058044    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.058051    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.061708    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:41.257802    4110 request.go:632] Waited for 195.475163ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:41.257884    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:41.257895    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.257906    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.257913    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.261190    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:41.261502    4110 pod_ready.go:98] node "ha-857000-m02" hosting pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:41.261513    4110 pod_ready.go:82] duration metric: took 399.156939ms for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	E0917 02:11:41.261527    4110 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-857000-m02" hosting pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:41.261532    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:41.458981    4110 request.go:632] Waited for 197.407496ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:11:41.459061    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:11:41.459070    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.459078    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.459084    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.461880    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:41.657846    4110 request.go:632] Waited for 195.542216ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:41.657906    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:41.657913    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.657921    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.657934    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.660204    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:41.660601    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:41.660610    4110 pod_ready.go:82] duration metric: took 399.066544ms for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:41.660617    4110 pod_ready.go:39] duration metric: took 5.403454072s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
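The pod_ready loop above repeatedly GETs each control-plane pod and its hosting node until the pod reports the PodReady condition as True (or the node itself is NotReady, producing the pod_ready.go:98 "skipping!" messages). A minimal client-go sketch of that wait follows; the polling interval and kubeconfig path are illustrative placeholders, not minikube's exact helper:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod reports Ready, mirroring the
    // "waiting up to 6m0s for pod ... to be Ready" lines in the log.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	// Placeholder kubeconfig path; minikube resolves its own.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-857000"); err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }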
	I0917 02:11:41.660636    4110 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:11:41.660697    4110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:11:41.672821    4110 api_server.go:72] duration metric: took 13.928795458s to wait for apiserver process to appear ...
	I0917 02:11:41.672831    4110 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:11:41.672845    4110 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 02:11:41.683603    4110 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0917 02:11:41.683654    4110 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 02:11:41.683660    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.683666    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.683670    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.684276    4110 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:11:41.684340    4110 api_server.go:141] control plane version: v1.31.1
	I0917 02:11:41.684350    4110 api_server.go:131] duration metric: took 11.515194ms to wait for apiserver health ...
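The healthz probe above is a plain GET against the apiserver's /healthz endpoint; a 200 response with body "ok" (as logged at api_server.go:279) counts as healthy. A sketch using client-go's REST client, with the kubeconfig path again a placeholder:

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// GET https://<apiserver>/healthz; the log above shows "returned 200: ok".
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("healthz: %s\n", body)
    }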
	I0917 02:11:41.684356    4110 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 02:11:41.857675    4110 request.go:632] Waited for 173.274042ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:41.857793    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:41.857803    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.857823    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.857833    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.863157    4110 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:11:41.868330    4110 system_pods.go:59] 26 kube-system pods found
	I0917 02:11:41.868348    4110 system_pods.go:61] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running
	I0917 02:11:41.868352    4110 system_pods.go:61] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running
	I0917 02:11:41.868360    4110 system_pods.go:61] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:11:41.868366    4110 system_pods.go:61] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 02:11:41.868371    4110 system_pods.go:61] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:11:41.868377    4110 system_pods.go:61] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:11:41.868392    4110 system_pods.go:61] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running
	I0917 02:11:41.868398    4110 system_pods.go:61] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:11:41.868402    4110 system_pods.go:61] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:11:41.868406    4110 system_pods.go:61] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:11:41.868424    4110 system_pods.go:61] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 02:11:41.868430    4110 system_pods.go:61] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:11:41.868434    4110 system_pods.go:61] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:11:41.868438    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 02:11:41.868442    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:11:41.868445    4110 system_pods.go:61] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:11:41.868448    4110 system_pods.go:61] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:11:41.868450    4110 system_pods.go:61] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running
	I0917 02:11:41.868454    4110 system_pods.go:61] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:11:41.868456    4110 system_pods.go:61] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:11:41.868468    4110 system_pods.go:61] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 02:11:41.868473    4110 system_pods.go:61] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:11:41.868484    4110 system_pods.go:61] "kube-vip-ha-857000" [84b805d8-9a8f-4c6f-b18f-76c98ca4776c] Running
	I0917 02:11:41.868488    4110 system_pods.go:61] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:11:41.868490    4110 system_pods.go:61] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:11:41.868493    4110 system_pods.go:61] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:11:41.868498    4110 system_pods.go:74] duration metric: took 184.134673ms to wait for pod list to return data ...
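The recurring "Waited for ...ms due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter, not from server-side API Priority and Fairness; when rest.Config leaves QPS and Burst unset, client-go has historically defaulted them to 5 and 10. A sketch of the kube-system pod listing above with the limiter made explicit (the values shown are those assumed defaults):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	cfg.QPS = 5    // client-go defaults; raising these would shorten the throttling waits
    	cfg.Burst = 10
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    }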
	I0917 02:11:41.868509    4110 default_sa.go:34] waiting for default service account to be created ...
	I0917 02:11:42.057457    4110 request.go:632] Waited for 188.887232ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:11:42.057501    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:11:42.057507    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:42.057512    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:42.057516    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:42.060122    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:42.060299    4110 default_sa.go:45] found service account: "default"
	I0917 02:11:42.060314    4110 default_sa.go:55] duration metric: took 191.792113ms for default service account to be created ...
	I0917 02:11:42.060320    4110 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 02:11:42.257458    4110 request.go:632] Waited for 197.098839ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:42.257490    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:42.257495    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:42.257501    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:42.257506    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:42.261392    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:42.267316    4110 system_pods.go:86] 26 kube-system pods found
	I0917 02:11:42.267336    4110 system_pods.go:89] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running
	I0917 02:11:42.267340    4110 system_pods.go:89] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running
	I0917 02:11:42.267343    4110 system_pods.go:89] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:11:42.267356    4110 system_pods.go:89] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 02:11:42.267362    4110 system_pods.go:89] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:11:42.267366    4110 system_pods.go:89] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:11:42.267369    4110 system_pods.go:89] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running
	I0917 02:11:42.267372    4110 system_pods.go:89] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:11:42.267377    4110 system_pods.go:89] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:11:42.267380    4110 system_pods.go:89] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:11:42.267385    4110 system_pods.go:89] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 02:11:42.267389    4110 system_pods.go:89] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:11:42.267392    4110 system_pods.go:89] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:11:42.267398    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 02:11:42.267402    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:11:42.267405    4110 system_pods.go:89] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:11:42.267408    4110 system_pods.go:89] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:11:42.267411    4110 system_pods.go:89] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running
	I0917 02:11:42.267415    4110 system_pods.go:89] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:11:42.267419    4110 system_pods.go:89] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:11:42.267423    4110 system_pods.go:89] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 02:11:42.267427    4110 system_pods.go:89] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:11:42.267436    4110 system_pods.go:89] "kube-vip-ha-857000" [84b805d8-9a8f-4c6f-b18f-76c98ca4776c] Running
	I0917 02:11:42.267438    4110 system_pods.go:89] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:11:42.267441    4110 system_pods.go:89] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:11:42.267444    4110 system_pods.go:89] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:11:42.267448    4110 system_pods.go:126] duration metric: took 207.120728ms to wait for k8s-apps to be running ...
	I0917 02:11:42.267459    4110 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:11:42.267525    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:11:42.280323    4110 system_svc.go:56] duration metric: took 12.855514ms WaitForService to wait for kubelet
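system_svc.go decides the kubelet is running from the exit status of `sudo systemctl is-active --quiet service kubelet` executed over SSH. A standalone sketch with golang.org/x/crypto/ssh; the address, user, and key path are placeholders for what minikube's ssh_runner resolves from the machine config:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/.minikube/machines/ha-857000/id_rsa") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
    	}
    	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	// Exit status 0 means the unit is active; any non-zero exit means not running.
    	if err := sess.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
    		fmt.Println("kubelet not active:", err)
    		return
    	}
    	fmt.Println("kubelet active")
    }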
	I0917 02:11:42.280342    4110 kubeadm.go:582] duration metric: took 14.536306226s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:11:42.280356    4110 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:11:42.458901    4110 request.go:632] Waited for 178.497588ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 02:11:42.458965    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 02:11:42.458970    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:42.458975    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:42.458980    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:42.461607    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:42.462345    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462358    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462367    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462370    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462374    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462377    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462380    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462383    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462386    4110 node_conditions.go:105] duration metric: took 182.022805ms to run NodePressure ...
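The NodePressure step reads each node's capacity and pressure conditions; the four capacity/cpu pairs above correspond to the four nodes of the ha-857000 cluster. A client-go sketch of the same read (clientset construction as in the earlier sketches, kubeconfig path a placeholder):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
    		for _, c := range n.Status.Conditions {
    			switch c.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				fmt.Printf("  %s=%s\n", c.Type, c.Status)
    			}
    		}
    	}
    }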
	I0917 02:11:42.462394    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:11:42.462412    4110 start.go:255] writing updated cluster config ...
	I0917 02:11:42.484336    4110 out.go:201] 
	I0917 02:11:42.505774    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:42.505869    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:11:42.527331    4110 out.go:177] * Starting "ha-857000-m03" control-plane node in "ha-857000" cluster
	I0917 02:11:42.569515    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:11:42.569551    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:11:42.569751    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:11:42.569769    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:11:42.569891    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:11:42.570622    4110 start.go:360] acquireMachinesLock for ha-857000-m03: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:11:42.570733    4110 start.go:364] duration metric: took 89.66µs to acquireMachinesLock for "ha-857000-m03"
	I0917 02:11:42.570758    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:11:42.570766    4110 fix.go:54] fixHost starting: m03
	I0917 02:11:42.571203    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:11:42.571238    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:11:42.581037    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52144
	I0917 02:11:42.581469    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:11:42.581811    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:11:42.581822    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:11:42.582051    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:11:42.582209    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:42.582294    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetState
	I0917 02:11:42.582428    4110 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:42.582545    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid from json: 3442
	I0917 02:11:42.583498    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid 3442 missing from process table
	I0917 02:11:42.583556    4110 fix.go:112] recreateIfNeeded on ha-857000-m03: state=Stopped err=<nil>
	I0917 02:11:42.583568    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	W0917 02:11:42.583655    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:11:42.604438    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m03" ...
	I0917 02:11:42.678579    4110 main.go:141] libmachine: (ha-857000-m03) Calling .Start
	I0917 02:11:42.678864    4110 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:42.678945    4110 main.go:141] libmachine: (ha-857000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid
	I0917 02:11:42.680796    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid 3442 missing from process table
	I0917 02:11:42.680811    4110 main.go:141] libmachine: (ha-857000-m03) DBG | pid 3442 is in state "Stopped"
	I0917 02:11:42.680856    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid...
	I0917 02:11:42.681059    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Using UUID 3d8fdba9-dbf7-47ea-a80b-a24a99cad96e
	I0917 02:11:42.708058    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Generated MAC 16:4d:1d:5e:91:c8
	I0917 02:11:42.708080    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:11:42.708229    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3d8fdba9-dbf7-47ea-a80b-a24a99cad96e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000398c90)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:11:42.708256    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3d8fdba9-dbf7-47ea-a80b-a24a99cad96e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000398c90)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:11:42.708317    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3d8fdba9-dbf7-47ea-a80b-a24a99cad96e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/ha-857000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:11:42.708369    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3d8fdba9-dbf7-47ea-a80b-a24a99cad96e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/ha-857000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:11:42.708386    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:11:42.710198    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Pid is 4146
	I0917 02:11:42.710768    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Attempt 0
	I0917 02:11:42.710795    4110 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:42.710847    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid from json: 4146
	I0917 02:11:42.712907    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Searching for 16:4d:1d:5e:91:c8 in /var/db/dhcpd_leases ...
	I0917 02:11:42.712978    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:11:42.713009    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:11:42.713035    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:11:42.713060    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:11:42.713079    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9724}
	I0917 02:11:42.713098    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Found match: 16:4d:1d:5e:91:c8
	I0917 02:11:42.713110    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetConfigRaw
	I0917 02:11:42.713129    4110 main.go:141] libmachine: (ha-857000-m03) DBG | IP: 192.169.0.7
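The hyperkit driver finds the restarted VM's IP by scanning /var/db/dhcpd_leases for the MAC it generated (16:4d:1d:5e:91:c8 above). An illustrative lookup follows; the `ip_address=`/`hw_address=` field names are assumed from the lease entries macOS's bootpd writes (the log above only shows minikube's parsed form), so treat this parser as a sketch:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const wantMAC = "16:4d:1d:5e:91:c8"
    	f, err := os.Open("/var/db/dhcpd_leases")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			// Assumed form: hw_address=1,<mac>; strip the hardware-type prefix.
    			hw := strings.TrimPrefix(line, "hw_address=")
    			if i := strings.IndexByte(hw, ','); i >= 0 {
    				hw = hw[i+1:]
    			}
    			if hw == wantMAC {
    				fmt.Println("found match:", ip)
    				return
    			}
    		}
    	}
    	fmt.Println("no lease for", wantMAC)
    }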
	I0917 02:11:42.713812    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:11:42.714067    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:11:42.714634    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:11:42.714648    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:42.714804    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:42.714912    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:42.715030    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:42.715172    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:42.715275    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:42.715462    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:42.715719    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:42.715729    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:11:42.719370    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:11:42.729567    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:11:42.730522    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:11:42.730552    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:11:42.730564    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:11:42.730573    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:11:43.130217    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:11:43.130237    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:11:43.246057    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:11:43.246080    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:11:43.246089    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:11:43.246096    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:11:43.246900    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:11:43.246909    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:11:48.954281    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:11:48.954379    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:11:48.954390    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:11:48.977816    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:11:53.786367    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:11:53.786383    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetMachineName
	I0917 02:11:53.786507    4110 buildroot.go:166] provisioning hostname "ha-857000-m03"
	I0917 02:11:53.786518    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetMachineName
	I0917 02:11:53.786619    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:53.786716    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:53.786814    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.786901    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.786991    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:53.787125    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:53.787256    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:53.787264    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m03 && echo "ha-857000-m03" | sudo tee /etc/hostname
	I0917 02:11:53.860809    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m03
	
	I0917 02:11:53.860831    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:53.860995    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:53.861092    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.861199    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.861302    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:53.861448    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:53.861610    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:53.861621    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:11:53.932575    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:11:53.932592    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:11:53.932604    4110 buildroot.go:174] setting up certificates
	I0917 02:11:53.932611    4110 provision.go:84] configureAuth start
	I0917 02:11:53.932618    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetMachineName
	I0917 02:11:53.932757    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:11:53.932853    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:53.932933    4110 provision.go:143] copyHostCerts
	I0917 02:11:53.932962    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:53.933012    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:11:53.933018    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:53.933153    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:11:53.933356    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:53.933385    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:11:53.933389    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:53.933461    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:11:53.933602    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:53.933640    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:11:53.933645    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:53.933711    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:11:53.933855    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m03 san=[127.0.0.1 192.169.0.7 ha-857000-m03 localhost minikube]
	I0917 02:11:54.077333    4110 provision.go:177] copyRemoteCerts
	I0917 02:11:54.077392    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:11:54.077407    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.077544    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.077643    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.077738    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.077820    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:54.116797    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:11:54.116876    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:11:54.136202    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:11:54.136278    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:11:54.156340    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:11:54.156419    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:11:54.175630    4110 provision.go:87] duration metric: took 243.006586ms to configureAuth
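configureAuth above refreshes the Docker TLS material for the node: the shared CA, client cert, and key are synced into the .minikube cache, a server certificate is generated with the SANs listed in the log (127.0.0.1, 192.169.0.7, ha-857000-m03, localhost, minikube), and server.pem, server-key.pem, and ca.pem are scp'd into /etc/docker. A sketch producing a certificate of the same shape (the openssl invocation is illustrative; minikube generates these in Go rather than by shelling out):

    # CSR plus CA-signed server cert carrying the SANs seen in the log
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -out server.csr -subj "/O=jenkins.ha-857000-m03"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.7,DNS:ha-857000-m03,DNS:localhost,DNS:minikube')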
	I0917 02:11:54.175645    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:11:54.175825    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:54.175845    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:54.175978    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.176072    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.176183    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.176286    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.176390    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.176544    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:54.176682    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:54.176690    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:11:54.238979    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:11:54.238993    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:11:54.239102    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:11:54.239114    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.239249    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.239359    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.239453    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.239547    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.239702    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:54.239844    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:54.239889    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:11:54.314599    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:11:54.314621    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.314767    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.314854    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.314947    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.315024    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.315150    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:54.315292    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:54.315304    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:11:55.935197    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:11:55.935211    4110 machine.go:96] duration metric: took 13.220338614s to provisionDockerMachine
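The docker.service update just above is deliberately idempotent: the candidate unit is written to docker.service.new and only swapped into place (with daemon-reload, enable, restart) when diff shows it differs from the installed unit, which here does not exist yet, hence the "can't stat" message followed by the enable symlink. Two details of the unit are worth noting: the empty ExecStart= line is the standard systemd idiom for clearing an inherited ExecStart (a non-oneshot service may carry only one), and of the two Environment=NO_PROXY assignments the later one wins, so the effective value is 192.169.0.5,192.169.0.6. The guard, spelled out on its own (unit body elided):

    # write the candidate, then swap it in only when it actually changed
    sudo tee /lib/systemd/system/docker.service.new >/dev/null <<'EOF'
    # ... unit body as printed above ...
    EOF
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    }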
	I0917 02:11:55.935219    4110 start.go:293] postStartSetup for "ha-857000-m03" (driver="hyperkit")
	I0917 02:11:55.935226    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:11:55.935240    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:55.935436    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:11:55.935456    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:55.935555    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:55.935640    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:55.935720    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:55.935796    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:55.975655    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:11:55.982326    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:11:55.982340    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:11:55.982439    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:11:55.982583    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:11:55.982589    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:11:55.982752    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:11:55.995355    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:11:56.016063    4110 start.go:296] duration metric: took 80.833975ms for postStartSetup
	I0917 02:11:56.016085    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.016278    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:11:56.016292    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:56.016390    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.016474    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.016549    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.016621    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:56.056575    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:11:56.056644    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:11:56.090435    4110 fix.go:56] duration metric: took 13.519431085s for fixHost
	I0917 02:11:56.090460    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:56.090600    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.090686    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.090776    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.090860    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.090993    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:56.091136    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:56.091142    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:11:56.155623    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564316.081021180
	
	I0917 02:11:56.155639    4110 fix.go:216] guest clock: 1726564316.081021180
	I0917 02:11:56.155645    4110 fix.go:229] Guest: 2024-09-17 02:11:56.08102118 -0700 PDT Remote: 2024-09-17 02:11:56.09045 -0700 PDT m=+89.019475712 (delta=-9.42882ms)
	I0917 02:11:56.155656    4110 fix.go:200] guest clock delta is within tolerance: -9.42882ms
	I0917 02:11:56.155660    4110 start.go:83] releasing machines lock for "ha-857000-m03", held for 13.584681554s
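fixHost finishes with a clock-skew probe: date +%s.%N is run in the guest and compared with the host clock, and the -9.42882ms delta here is inside tolerance, so no time resync is triggered. A manual version of the same comparison (the key path and user are taken from the log's sshutil lines; the snippet is illustrative, and python3 stands in for %N since the macOS host's BSD date lacks nanosecond output):

    # compare guest and host clocks; a large delta would call for a resync in the VM
    host_now=$(python3 -c 'import time; print(f"{time.time():.9f}")')
    guest_now=$(ssh -i /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa \
      docker@192.169.0.7 'date +%s.%N')
    echo "delta: $(echo "$guest_now - $host_now" | bc)s"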
	I0917 02:11:56.155677    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.155816    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:11:56.177120    4110 out.go:177] * Found network options:
	I0917 02:11:56.197056    4110 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0917 02:11:56.217835    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:11:56.217862    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:56.217881    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.218511    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.218685    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.218846    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 02:11:56.218876    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:56.218892    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	W0917 02:11:56.218898    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:56.219005    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:11:56.219024    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:56.219078    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.219246    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.219309    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.219439    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.219492    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.219585    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:56.219614    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.219751    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	W0917 02:11:56.256644    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:11:56.256720    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:11:56.309886    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:11:56.309904    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:56.309980    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:56.326165    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:11:56.334717    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:11:56.343026    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:11:56.343079    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:11:56.351351    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:56.359978    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:11:56.368445    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:56.376813    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:11:56.385309    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:11:56.393895    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:11:56.402441    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:11:56.410891    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:11:56.418564    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:11:56.426298    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:56.529182    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
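Before settling on a runtime, minikube normalizes /etc/containerd/config.toml with the sed edits above (SystemdCgroup = false for the cgroupfs driver, the runc v2 shim, pause:3.10 as sandbox image, conf_dir /etc/cni/net.d, unprivileged ports enabled) and points crictl at containerd. The /etc/crictl.yaml written at this stage is a single line:

    runtime-endpoint: unix:///run/containerd/containerd.sock

It is rewritten a moment later (02:11:56.646) to the cri-dockerd socket once Docker is chosen as the runtime.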
	I0917 02:11:56.548629    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:56.548711    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:11:56.564564    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:56.575668    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:11:56.592483    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:56.605747    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:56.616286    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:11:56.636099    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:56.646661    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:56.662025    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:11:56.665163    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:11:56.672775    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:11:56.686783    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:11:56.787618    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:11:56.902014    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:11:56.902043    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:11:56.916683    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:57.010321    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:11:59.292351    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.28197073s)
	I0917 02:11:59.292423    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:11:59.302881    4110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:11:59.315909    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:59.326097    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:11:59.423622    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:11:59.534194    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:59.650222    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:11:59.664197    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:59.675195    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:59.768785    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:11:59.834137    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:11:59.834234    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:11:59.838654    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:11:59.838726    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:11:59.844060    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:11:59.874850    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
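With Docker selected, containerd and crio are stopped, crictl is repointed at unix:///var/run/cri-dockerd.sock, and cri-dockerd is brought up socket-first: unmask and enable the socket, daemon-reload, restart the socket, then the service. The crictl version output above (RuntimeName docker, RuntimeApiVersion v1) is the success check. The same endpoint can be probed by hand:

    # both units should be active once the CRI endpoint is serving
    sudo systemctl is-active cri-docker.socket cri-docker.service
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info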
	I0917 02:11:59.874944    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:59.893142    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:59.934010    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:11:59.974908    4110 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 02:11:59.996010    4110 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0917 02:12:00.016678    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:12:00.016979    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:12:00.020450    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
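The one-liner above is the /etc/hosts patch idiom minikube uses for its internal names: filter out any stale line ending in <tab>host.minikube.internal, append the fresh mapping, stage the result in a PID-keyed temp file, and sudo cp it back (cp rather than mv keeps the original file's inode and permissions). Generalized, with NAME and IP as illustrative variables:

    # replace-or-append a tab-separated /etc/hosts entry
    NAME=host.minikube.internal; IP=192.169.0.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$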
	I0917 02:12:00.029942    4110 mustload.go:65] Loading cluster: ha-857000
	I0917 02:12:00.030121    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:00.030345    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:00.030368    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:00.039149    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52286
	I0917 02:12:00.039489    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:00.039838    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:00.039856    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:00.040084    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:00.040206    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:12:00.040304    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:00.040367    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:12:00.041347    4110 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:12:00.041604    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:00.041629    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:00.050248    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52288
	I0917 02:12:00.050590    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:00.050943    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:00.050963    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:00.051142    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:00.051249    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:12:00.051358    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.7
	I0917 02:12:00.051364    4110 certs.go:194] generating shared ca certs ...
	I0917 02:12:00.051373    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:12:00.051518    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:12:00.051569    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:12:00.051578    4110 certs.go:256] generating profile certs ...
	I0917 02:12:00.051672    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:12:00.051762    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.daf177bc
	I0917 02:12:00.051812    4110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:12:00.051819    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:12:00.051841    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:12:00.051859    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:12:00.051878    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:12:00.051895    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:12:00.051919    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:12:00.051943    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:12:00.051962    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:12:00.052037    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:12:00.052085    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:12:00.052093    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:12:00.052128    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:12:00.052160    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:12:00.052188    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:12:00.052263    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:12:00.052296    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.052317    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.052334    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.052362    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:12:00.052450    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:12:00.052535    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:12:00.052624    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:12:00.052722    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:12:00.080096    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 02:12:00.083244    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 02:12:00.090969    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 02:12:00.094112    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 02:12:00.101834    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 02:12:00.104986    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 02:12:00.113430    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 02:12:00.116712    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 02:12:00.124546    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 02:12:00.127709    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 02:12:00.135587    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 02:12:00.138750    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 02:12:00.147884    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:12:00.168533    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:12:00.188900    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:12:00.208781    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:12:00.229275    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:12:00.248994    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:12:00.269569    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:12:00.289646    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:12:00.309509    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:12:00.329488    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:12:00.349487    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:12:00.369414    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 02:12:00.383327    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 02:12:00.396803    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 02:12:00.410693    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 02:12:00.424533    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 02:12:00.438144    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 02:12:00.451710    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 02:12:00.465698    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:12:00.470190    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:12:00.478670    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.482005    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.482051    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.486183    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:12:00.494427    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:12:00.503098    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.506593    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.506643    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.510950    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:12:00.519387    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:12:00.527796    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.531174    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.531231    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.535528    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
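Each test -L ... || ln -fs pair above installs a PEM into the OpenSSL trust directory under its subject-hash name: openssl x509 -hash -noout prints the hash OpenSSL uses for lookup, and /etc/ssl/certs/<hash>.0 must link to the certificate for verification to find it (hence b5213941.0 for minikubeCA.pem). Done by hand for one file:

    # link a CA into the OpenSSL hash directory so verification can find it
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"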
	I0917 02:12:00.543734    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:12:00.547058    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:12:00.551336    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:12:00.555666    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:12:00.560095    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:12:00.564671    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:12:00.568907    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
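The -checkend 86400 probes are a cheap expiry guard: openssl x509 -checkend N exits 0 when the certificate is still valid N seconds from now and 1 when it will have expired, so each control-plane certificate is verified to have at least a day left before being reused. For example:

    # exit status says whether the cert survives the next 24 hours
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "valid for at least another day"
    else
      echo "expires within 24h; regenerate"
    fi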
	I0917 02:12:00.573116    4110 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.1 docker true true} ...
	I0917 02:12:00.573181    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
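The kubelet drop-in above pins this kubelet to its node identity: --hostname-override=ha-857000-m03 and --node-ip=192.169.0.7 make the third control-plane member register under the right name and address, and --bootstrap-kubeconfig lets it mint client credentials when /etc/kubernetes/kubelet.conf does not yet exist. After the scp of 10-kubeadm.conf and kubelet.service below, the effective unit can be inspected with:

    # shows the base unit plus the 10-kubeadm.conf drop-in carrying these flags
    systemctl cat kubelet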
	I0917 02:12:00.573213    4110 kube-vip.go:115] generating kube-vip config ...
	I0917 02:12:00.573252    4110 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:12:00.585709    4110 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:12:00.585750    4110 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
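The kube-vip static pod above is what backs the control-plane VIP 192.169.0.254 (APIServerHAVIP in the profile config): vip_arp with vip_interface=eth0 announces the address over ARP, cp_enable plus the vip_leaderelection settings have the control-plane nodes contend for the plndr-cp-lock Lease (5s lease, 3s renew deadline, 1s retry), and lb_enable/lb_port round-robin port 8443 across members. Assuming standard kube-vip lease semantics, the current VIP holder can be read off the Lease once the cluster is reachable:

    # holderIdentity names the node currently answering on 192.169.0.254
    kubectl -n kube-system get lease plndr-cp-lock \
      -o jsonpath='{.spec.holderIdentity}{"\n"}'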
	I0917 02:12:00.585815    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:12:00.593621    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:12:00.593672    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 02:12:00.600967    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 02:12:00.614925    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:12:00.628761    4110 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:12:00.642265    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:12:00.645102    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:12:00.654336    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:00.752482    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:00.767122    4110 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:12:00.767316    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:00.788252    4110 out.go:177] * Verifying Kubernetes components...
	I0917 02:12:00.808843    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:00.927434    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:00.944321    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:12:00.944565    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 02:12:00.944614    4110 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 02:12:00.944789    4110 node_ready.go:35] waiting up to 6m0s for node "ha-857000-m03" to be "Ready" ...
	I0917 02:12:00.944851    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:00.944858    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.944867    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.944872    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.946764    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:00.947061    4110 node_ready.go:49] node "ha-857000-m03" has status "Ready":"True"
	I0917 02:12:00.947072    4110 node_ready.go:38] duration metric: took 2.273862ms for node "ha-857000-m03" to be "Ready" ...
	I0917 02:12:00.947078    4110 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:12:00.947127    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:00.947133    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.947139    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.947143    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.950970    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:00.956449    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.956504    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:00.956511    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.956518    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.956526    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.959279    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.959653    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:00.959660    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.959666    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.959669    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.961657    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:00.962160    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.962170    4110 pod_ready.go:82] duration metric: took 5.706294ms for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.962176    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.962215    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nl5j5
	I0917 02:12:00.962221    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.962226    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.962230    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.966635    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:00.967113    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:00.967122    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.967128    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.967131    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.969231    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.969585    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.969594    4110 pod_ready.go:82] duration metric: took 7.413149ms for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.969601    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.969645    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000
	I0917 02:12:00.969650    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.969655    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.969659    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.971799    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.972247    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:00.972254    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.972264    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.972267    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.974411    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.974879    4110 pod_ready.go:93] pod "etcd-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.974888    4110 pod_ready.go:82] duration metric: took 5.282457ms for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.974895    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.974931    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m02
	I0917 02:12:00.974936    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.974941    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.974945    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.977288    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.977952    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:00.977959    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.977964    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.977966    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.980610    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.981051    4110 pod_ready.go:93] pod "etcd-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.981061    4110 pod_ready.go:82] duration metric: took 6.161283ms for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.981068    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.146340    4110 request.go:632] Waited for 165.222252ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:12:01.146408    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:12:01.146414    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.146420    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.146423    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.148663    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:01.345119    4110 request.go:632] Waited for 196.038973ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:01.345177    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:01.345186    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.345198    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.345210    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.348611    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:01.349143    4110 pod_ready.go:93] pod "etcd-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:01.349154    4110 pod_ready.go:82] duration metric: took 368.067559ms for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
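The readiness loop pairs each kube-system pod GET with a GET of its node, and the "Waited ... due to client-side throttling" lines are client-go's default rate limiter at work: with QPS and Burst left at 0 in the rest.Config dumped above, the client falls back to its defaults (5 requests/s, burst 10), so back-to-back requests queue for roughly 200ms each, which matches the 165-197ms waits in the log. A one-shot kubectl equivalent of the probes against the same API:

    # pod and node Ready conditions, as the pod_ready/node_ready helpers check them
    kubectl -n kube-system get pod etcd-ha-857000-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    kubectl get node ha-857000-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'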
	I0917 02:12:01.349166    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.545007    4110 request.go:632] Waited for 195.782486ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:01.545050    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:01.545055    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.545061    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.545066    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.547602    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:01.745603    4110 request.go:632] Waited for 197.630153ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:01.745661    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:01.745667    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.745673    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.745676    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.748299    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:01.748902    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:01.748919    4110 pod_ready.go:82] duration metric: took 399.734114ms for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.748926    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.945883    4110 request.go:632] Waited for 196.866004ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:01.945945    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:01.945954    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.945964    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.945969    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.951958    4110 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:12:02.145413    4110 request.go:632] Waited for 192.798684ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:02.145468    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:02.145478    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.145511    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.145520    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.148357    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.149190    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:02.149203    4110 pod_ready.go:82] duration metric: took 400.265258ms for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:02.149211    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:02.345683    4110 request.go:632] Waited for 196.426528ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.345728    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.345736    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.345744    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.345751    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.348508    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.544925    4110 request.go:632] Waited for 196.020856ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.544994    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.545000    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.545006    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.545009    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.547483    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.744993    4110 request.go:632] Waited for 95.563815ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.745048    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.745054    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.745061    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.745065    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.747122    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.945441    4110 request.go:632] Waited for 197.559126ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.945475    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.945480    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.945486    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.945491    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.948036    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:03.150936    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:03.150968    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.150975    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.150980    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.153272    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:03.346424    4110 request.go:632] Waited for 192.442992ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.346514    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.346521    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.346528    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.346533    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.350998    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:03.649774    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:03.649809    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.649818    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.649823    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.652931    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:03.744972    4110 request.go:632] Waited for 90.967061ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.745023    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.745029    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.745034    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.745039    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.747431    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:04.149979    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:04.150024    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.150033    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.150037    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.153328    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:04.153812    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:04.153822    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.153828    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.153832    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.156074    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:04.156716    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
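
Each pod_ready cycle above is one iteration of a poll: fetch the pod, read its PodReady condition, fetch its node, and retry until the condition reports True or the 6m0s budget expires. A minimal sketch of that loop with client-go, assuming a kubeconfig at the default path; the pod name is taken from the log, and error handling is simplified.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll every 500ms for up to 6 minutes, mirroring the "waiting up to 6m0s" lines.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-857000-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			return podReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
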
	I0917 02:12:04.650904    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:04.650924    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.650931    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.650946    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.653820    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:04.654378    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:04.654386    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.654393    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.654396    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.656654    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:05.151431    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:05.151485    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.151499    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.151506    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.154809    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:05.155323    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:05.155331    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.155337    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.155340    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.156965    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:05.650343    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:05.650367    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.650413    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.650421    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.653876    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:05.654508    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:05.654516    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.654522    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.654525    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.656260    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:06.149952    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:06.149982    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.149989    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.149994    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.152142    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:06.152594    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:06.152602    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.152608    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.152611    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.154378    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:06.650007    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:06.650040    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.650049    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.650053    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.652517    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:06.653131    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:06.653138    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.653144    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.653148    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.655153    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:06.655511    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:07.150612    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:07.150642    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.150678    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.150687    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.153805    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:07.154498    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:07.154508    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.154516    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.154521    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.156264    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:07.650356    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:07.650381    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.650392    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.650401    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.653535    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:07.653958    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:07.653966    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.653972    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.653975    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.656337    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:08.150386    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:08.150440    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.150452    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.150460    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.153584    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:08.155108    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:08.155123    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.155132    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.155143    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.157038    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:08.650349    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:08.650377    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.650389    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.650398    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.654034    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:08.654828    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:08.654836    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.654843    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.654846    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.656625    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:08.656928    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:09.151423    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:09.151447    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.151459    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.151464    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.154460    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:09.154947    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:09.154956    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.154961    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.154966    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.156555    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:09.650477    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:09.650503    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.650554    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.650568    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.653583    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:09.653960    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:09.653967    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.653973    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.653983    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.655828    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:10.149696    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:10.149720    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.149732    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.149739    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.153151    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:10.153716    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:10.153726    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.153734    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.153739    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.155758    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:10.649780    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:10.649830    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.649844    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.649854    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.653210    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:10.653938    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:10.653945    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.653951    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.653956    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.655718    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:11.149497    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:11.149512    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.149525    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.149530    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.151647    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:11.152174    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:11.152181    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.152187    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.152189    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.154098    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:11.154423    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:11.650969    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:11.650998    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.651032    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.651039    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.654171    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:11.654962    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:11.654969    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.654975    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.654979    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.656692    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.150871    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:12.150884    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.150890    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.150893    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.153079    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:12.153733    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:12.153741    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.153747    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.153751    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.155608    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.650611    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:12.650636    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.650674    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.650684    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.654409    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:12.654934    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:12.654941    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.654951    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.654954    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.656676    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.657136    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:12.657145    4110 pod_ready.go:82] duration metric: took 10.507747852s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.657152    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.657184    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:12:12.657189    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.657194    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.657198    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.658893    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.659304    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:12.659312    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.659317    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.659321    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.660920    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.661222    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:12.661230    4110 pod_ready.go:82] duration metric: took 4.073163ms for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.661237    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.661269    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:12:12.661274    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.661279    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.661282    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.662821    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.663178    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:12.663186    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.663192    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.663195    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.664635    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.665084    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:12.665092    4110 pod_ready.go:82] duration metric: took 3.849688ms for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.665098    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.665131    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:12.665136    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.665142    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.665157    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.666924    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.667551    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:12.667558    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.667564    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.667566    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.669116    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:13.165275    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:13.165342    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.165359    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.165367    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.168538    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:13.169042    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:13.169049    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.169054    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.169059    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.170903    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:13.665896    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:13.665914    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.665923    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.665930    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.668510    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:13.669059    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:13.669066    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.669071    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.669074    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.670842    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:14.165888    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:14.165910    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.165935    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.165941    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.168473    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:14.169111    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:14.169118    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.169124    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.169137    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.170994    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:14.667072    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:14.667128    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.667140    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.667151    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.670650    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:14.671210    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:14.671217    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.671222    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.671226    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.672859    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:14.673218    4110 pod_ready.go:103] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:15.165335    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:15.165362    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.165375    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.165382    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.169212    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:15.169615    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:15.169623    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.169629    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.169633    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.171395    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:15.665422    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:15.665483    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.665498    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.665505    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.667889    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:15.668348    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:15.668356    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.668364    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.668369    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.670115    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.166085    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:16.166134    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.166147    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.166156    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.168879    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:16.169423    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:16.169430    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.169439    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.169442    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.171016    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.666749    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:16.666767    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.666797    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.666802    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.669480    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:16.669826    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:16.669832    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.669838    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.669842    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.671504    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.671930    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.671939    4110 pod_ready.go:82] duration metric: took 4.006767511s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.671955    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.671990    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:12:16.671995    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.672000    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.672005    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.673862    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.674451    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:12:16.674459    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.674464    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.674468    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.676355    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.676667    4110 pod_ready.go:93] pod "kube-proxy-528ht" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.676675    4110 pod_ready.go:82] duration metric: took 4.715112ms for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.676682    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.676724    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:12:16.676729    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.676734    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.676738    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.678611    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.678986    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:16.678993    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.678999    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.679003    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.680713    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.681034    4110 pod_ready.go:93] pod "kube-proxy-g9wxm" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.681043    4110 pod_ready.go:82] duration metric: took 4.356651ms for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.681050    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.681091    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:12:16.681097    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.681102    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.681106    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.682940    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.683445    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:16.683452    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.683458    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.683462    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.685017    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.685461    4110 pod_ready.go:93] pod "kube-proxy-vskbj" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.685470    4110 pod_ready.go:82] duration metric: took 4.414596ms for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.685478    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.851971    4110 request.go:632] Waited for 166.418009ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:12:16.852035    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:12:16.852064    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.852076    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.852084    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.855683    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:17.050985    4110 request.go:632] Waited for 194.718198ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.051089    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.051098    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.051110    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.051119    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.054384    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:17.054876    4110 pod_ready.go:93] pod "kube-proxy-zrqvr" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:17.054889    4110 pod_ready.go:82] duration metric: took 369.398412ms for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.054898    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.250755    4110 request.go:632] Waited for 195.811261ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:12:17.250805    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:12:17.250817    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.250830    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.250841    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.291380    4110 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0917 02:12:17.450914    4110 request.go:632] Waited for 157.443488ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:17.450956    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:17.450990    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.450996    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.450999    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.455828    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:17.456276    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:17.456286    4110 pod_ready.go:82] duration metric: took 401.376038ms for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.456294    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.651418    4110 request.go:632] Waited for 195.082221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:12:17.651455    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:12:17.651461    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.651471    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.651495    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.668422    4110 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0917 02:12:17.850764    4110 request.go:632] Waited for 181.996065ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.850819    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.850825    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.850832    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.850836    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.857947    4110 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 02:12:17.858420    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:17.858431    4110 pod_ready.go:82] duration metric: took 402.124989ms for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.858439    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:18.051442    4110 request.go:632] Waited for 192.93696ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:12:18.051483    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:12:18.051491    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.051499    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.051512    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.054127    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:18.250926    4110 request.go:632] Waited for 196.199352ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:18.250961    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:18.250966    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.251003    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.251008    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.274920    4110 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0917 02:12:18.275585    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:18.275595    4110 pod_ready.go:82] duration metric: took 417.143356ms for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:18.275606    4110 pod_ready.go:39] duration metric: took 17.328217726s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:12:18.275618    4110 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:12:18.275688    4110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:12:18.289040    4110 api_server.go:72] duration metric: took 17.521587147s to wait for apiserver process to appear ...
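
The process check runs sudo pgrep -xnf kube-apiserver.*minikube.* inside the VM (over SSH via minikube's ssh_runner). A hedged local sketch of the same check with os/exec, for illustration only:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -x: match the whole command line exactly against the pattern,
	// -n: newest matching process, -f: match the full command line.
	cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	out, err := cmd.Output()
	if err != nil {
		// pgrep exits non-zero when nothing matches.
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}
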
	I0917 02:12:18.289060    4110 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:12:18.289072    4110 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 02:12:18.292824    4110 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
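
The healthz probe is a plain GET against the apiserver that expects a 200 response with body "ok". A minimal sketch follows; TLS verification is skipped here purely for illustration, whereas the real client authenticates with the cluster's certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Illustration only: do not skip verification in real checks.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
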
	I0917 02:12:18.292862    4110 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 02:12:18.292866    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.292872    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.292879    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.294137    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:18.294247    4110 api_server.go:141] control plane version: v1.31.1
	I0917 02:12:18.294257    4110 api_server.go:131] duration metric: took 5.192363ms to wait for apiserver health ...
	I0917 02:12:18.294263    4110 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 02:12:18.451185    4110 request.go:632] Waited for 156.882548ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.451216    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.451222    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.451248    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.451254    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.490169    4110 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0917 02:12:18.505194    4110 system_pods.go:59] 26 kube-system pods found
	I0917 02:12:18.505219    4110 system_pods.go:61] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.505226    4110 system_pods.go:61] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.505231    4110 system_pods.go:61] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:12:18.505234    4110 system_pods.go:61] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running
	I0917 02:12:18.505237    4110 system_pods.go:61] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:12:18.505240    4110 system_pods.go:61] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:12:18.505244    4110 system_pods.go:61] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:12:18.505247    4110 system_pods.go:61] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:12:18.505250    4110 system_pods.go:61] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running
	I0917 02:12:18.505273    4110 system_pods.go:61] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:12:18.505282    4110 system_pods.go:61] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running
	I0917 02:12:18.505290    4110 system_pods.go:61] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:12:18.505313    4110 system_pods.go:61] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:12:18.505323    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running
	I0917 02:12:18.505338    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:12:18.505343    4110 system_pods.go:61] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:12:18.505351    4110 system_pods.go:61] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:12:18.505361    4110 system_pods.go:61] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:12:18.505367    4110 system_pods.go:61] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running
	I0917 02:12:18.505373    4110 system_pods.go:61] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:12:18.505378    4110 system_pods.go:61] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running
	I0917 02:12:18.505384    4110 system_pods.go:61] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:12:18.505388    4110 system_pods.go:61] "kube-vip-ha-857000" [84b805d8-9a8f-4c6f-b18f-76c98ca4776c] Running
	I0917 02:12:18.505392    4110 system_pods.go:61] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:12:18.505396    4110 system_pods.go:61] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:12:18.505399    4110 system_pods.go:61] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:12:18.505406    4110 system_pods.go:74] duration metric: took 211.134036ms to wait for pod list to return data ...
	I0917 02:12:18.505413    4110 default_sa.go:34] waiting for default service account to be created ...
	I0917 02:12:18.650733    4110 request.go:632] Waited for 145.255733ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:12:18.650776    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:12:18.650782    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.650793    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.650798    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.659108    4110 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 02:12:18.659203    4110 default_sa.go:45] found service account: "default"
	I0917 02:12:18.659217    4110 default_sa.go:55] duration metric: took 153.795915ms for default service account to be created ...
	I0917 02:12:18.659227    4110 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 02:12:18.851528    4110 request.go:632] Waited for 192.225662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.851585    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.851591    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.851597    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.851600    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.855716    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:18.861599    4110 system_pods.go:86] 26 kube-system pods found
	I0917 02:12:18.861618    4110 system_pods.go:89] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.861630    4110 system_pods.go:89] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.861635    4110 system_pods.go:89] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:12:18.861638    4110 system_pods.go:89] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running
	I0917 02:12:18.861642    4110 system_pods.go:89] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:12:18.861645    4110 system_pods.go:89] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:12:18.861649    4110 system_pods.go:89] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:12:18.861653    4110 system_pods.go:89] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:12:18.861657    4110 system_pods.go:89] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running
	I0917 02:12:18.861660    4110 system_pods.go:89] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:12:18.861663    4110 system_pods.go:89] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running
	I0917 02:12:18.861666    4110 system_pods.go:89] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:12:18.861670    4110 system_pods.go:89] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:12:18.861673    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running
	I0917 02:12:18.861677    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:12:18.861682    4110 system_pods.go:89] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:12:18.861685    4110 system_pods.go:89] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:12:18.861690    4110 system_pods.go:89] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:12:18.861694    4110 system_pods.go:89] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running
	I0917 02:12:18.861698    4110 system_pods.go:89] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:12:18.861701    4110 system_pods.go:89] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running
	I0917 02:12:18.861704    4110 system_pods.go:89] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:12:18.861707    4110 system_pods.go:89] "kube-vip-ha-857000" [c577f2f1-ab99-4fbe-acc1-516a135f0377] Pending
	I0917 02:12:18.861710    4110 system_pods.go:89] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:12:18.861713    4110 system_pods.go:89] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:12:18.861715    4110 system_pods.go:89] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:12:18.861720    4110 system_pods.go:126] duration metric: took 202.461636ms to wait for k8s-apps to be running ...
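
The "k8s-apps to be running" check walks the kube-system pod list and treats any pod that is not Running or Succeeded (such as the Pending kube-vip pod above) as not ready yet. A rough client-go sketch of that walk, assuming an already-built clientset; illustrative only, not system_pods.go verbatim:

    package podwait

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // allSystemPodsRunning reports whether every kube-system pod is in the
    // Running or Succeeded phase; a Pending pod (like kube-vip-ha-857000
    // above) keeps the wait loop going.
    func allSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, fmt.Errorf("listing kube-system pods: %w", err)
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
                return false, nil
            }
        }
        return true, nil
    }
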
	I0917 02:12:18.861726    4110 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:12:18.861778    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:12:18.882032    4110 system_svc.go:56] duration metric: took 20.298661ms WaitForService to wait for kubelet
	I0917 02:12:18.882059    4110 kubeadm.go:582] duration metric: took 18.114595178s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
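
The kubelet wait relies on systemctl's exit code: `systemctl is-active --quiet` exits 0 only when the unit is active, so no output parsing is needed. A local Go sketch of the same test (minikube runs its variant remotely through ssh_runner.go):

    package svc

    import "os/exec"

    // kubeletActive mirrors the check in the log above: the command's
    // error value is the whole answer, since is-active --quiet prints
    // nothing and signals state via its exit code.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
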
	I0917 02:12:18.882083    4110 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:12:19.052878    4110 request.go:632] Waited for 170.643294ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 02:12:19.052938    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 02:12:19.052951    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:19.052966    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:19.052976    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:19.057011    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:19.057806    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057817    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057824    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057827    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057830    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057834    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057837    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057840    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057843    4110 node_conditions.go:105] duration metric: took 175.740836ms to run NodePressure ...
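
The NodePressure pass lists all four nodes and reads the same two capacity fields printed above. A client-go sketch of that read, again assuming a prepared clientset:

    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity prints, for every node, the ephemeral storage and
    // CPU capacity values the log reports for each cluster node.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }
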
	I0917 02:12:19.057851    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:12:19.057867    4110 start.go:255] writing updated cluster config ...
	I0917 02:12:19.079978    4110 out.go:201] 
	I0917 02:12:19.117280    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:19.117377    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:12:19.138898    4110 out.go:177] * Starting "ha-857000-m04" worker node in "ha-857000" cluster
	I0917 02:12:19.180945    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:12:19.180969    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:12:19.181086    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:12:19.181097    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:12:19.181167    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:12:19.181757    4110 start.go:360] acquireMachinesLock for ha-857000-m04: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:12:19.181807    4110 start.go:364] duration metric: took 37.353µs to acquireMachinesLock for "ha-857000-m04"
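
acquireMachinesLock serializes VM operations per machine name, retrying every Delay (500ms) up to Timeout (13m) as shown in the lock struct above. A simplified flock-based stand-in for the lock minikube actually uses (the path and exact semantics here are assumptions):

    package lock

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    // acquire polls a flock-protected lock file until it is free or the
    // timeout expires, mirroring the Delay/Timeout fields in the log.
    func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return nil, err
        }
        deadline := time.Now().Add(timeout)
        for {
            if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
                return f, nil // caller must Flock(LOCK_UN) and Close when done
            }
            if time.Now().After(deadline) {
                f.Close()
                return nil, fmt.Errorf("timed out waiting for lock %s", path)
            }
            time.Sleep(delay)
        }
    }
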
	I0917 02:12:19.181825    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:12:19.181830    4110 fix.go:54] fixHost starting: m04
	I0917 02:12:19.182086    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:19.182106    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:19.191065    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52292
	I0917 02:12:19.191452    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:19.191850    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:19.191867    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:19.192069    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:19.192186    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:19.192279    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetState
	I0917 02:12:19.192404    4110 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:19.192500    4110 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid from json: 3550
	I0917 02:12:19.193450    4110 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid 3550 missing from process table
	I0917 02:12:19.193488    4110 fix.go:112] recreateIfNeeded on ha-857000-m04: state=Stopped err=<nil>
	I0917 02:12:19.193498    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	W0917 02:12:19.193587    4110 fix.go:138] unexpected machine state, will restart: <nil>
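
The state=Stopped verdict comes from probing the pid recorded in the machine's JSON: "hyperkit pid 3550 missing from process table" means the probe failed. On Unix, signal 0 tests for process existence without delivering anything. A sketch of that probe:

    package vmstate

    import (
        "os"
        "syscall"
    )

    // pidAlive reports whether a process with the given pid exists, the
    // same liveness test behind the "missing from process table" message.
    func pidAlive(pid int) bool {
        p, err := os.FindProcess(pid) // always succeeds on Unix
        if err != nil {
            return false
        }
        return p.Signal(syscall.Signal(0)) == nil
    }
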
	I0917 02:12:19.214824    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m04" ...
	I0917 02:12:19.289023    4110 main.go:141] libmachine: (ha-857000-m04) Calling .Start
	I0917 02:12:19.289295    4110 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:19.289356    4110 main.go:141] libmachine: (ha-857000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/hyperkit.pid
	I0917 02:12:19.289453    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Using UUID 32bc812d-06ce-423b-90a4-5417ea5ec912
	I0917 02:12:19.319068    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Generated MAC a:b6:8:34:25:a6
	I0917 02:12:19.319111    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:12:19.319291    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"32bc812d-06ce-423b-90a4-5417ea5ec912", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000289980)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:12:19.319339    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"32bc812d-06ce-423b-90a4-5417ea5ec912", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000289980)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:12:19.319395    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "32bc812d-06ce-423b-90a4-5417ea5ec912", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/ha-857000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:12:19.319498    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 32bc812d-06ce-423b-90a4-5417ea5ec912 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/ha-857000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:12:19.319538    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:12:19.321260    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Pid is 4161
	I0917 02:12:19.321886    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Attempt 0
	I0917 02:12:19.321908    4110 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:19.321989    4110 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid from json: 4161
	I0917 02:12:19.324366    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Searching for a:b6:8:34:25:a6 in /var/db/dhcpd_leases ...
	I0917 02:12:19.324461    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:12:19.324494    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:12:19.324519    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:12:19.324537    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:12:19.324552    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:12:19.324565    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Found match: a:b6:8:34:25:a6
	I0917 02:12:19.324580    4110 main.go:141] libmachine: (ha-857000-m04) DBG | IP: 192.169.0.8
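
The driver maps the generated MAC to an IP by scanning /var/db/dhcpd_leases, whose parsed entries are dumped above. A hedged sketch of that lookup, assuming the brace-delimited key=value record format of the macOS lease file (the field ordering within a record is an assumption):

    package dhcp

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // lookupIP scans a dhcpd lease file for a record whose hw_address
    // field ends with the given MAC and returns its ip_address. Assumes
    // ip_address precedes hw_address within each record.
    func lookupIP(leaseFile, mac string) (string, error) {
        f, err := os.Open(leaseFile)
        if err != nil {
            return "", err
        }
        defer f.Close()
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
                return ip, nil
            }
        }
        return "", fmt.Errorf("MAC %s not found in %s", mac, leaseFile)
    }
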
	I0917 02:12:19.324586    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetConfigRaw
	I0917 02:12:19.325317    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:19.325565    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:12:19.326089    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:12:19.326109    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:19.326263    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:19.326401    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:19.326560    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:19.326727    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:19.326852    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:19.327048    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:19.327215    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:19.327223    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:12:19.329900    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:12:19.339917    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:12:19.340861    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:12:19.340880    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:12:19.340887    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:12:19.340906    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:12:19.732737    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:12:19.732752    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:12:19.847625    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:12:19.847643    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:12:19.847688    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:12:19.847715    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:12:19.848483    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:12:19.848501    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:12:25.591852    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:12:25.591915    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:12:25.591925    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:12:25.615174    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:12:29.572071    4110 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.8:22: connect: connection refused
	I0917 02:12:32.627647    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:12:32.627664    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetMachineName
	I0917 02:12:32.627799    4110 buildroot.go:166] provisioning hostname "ha-857000-m04"
	I0917 02:12:32.627808    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetMachineName
	I0917 02:12:32.627920    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.628014    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.628110    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.628210    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.628294    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.628431    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:32.628580    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:32.628587    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m04 && echo "ha-857000-m04" | sudo tee /etc/hostname
	I0917 02:12:32.692963    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m04
	
	I0917 02:12:32.692980    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.693102    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.693193    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.693281    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.693375    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.693517    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:32.693670    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:32.693680    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:12:32.753597    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:12:32.753613    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:12:32.753629    4110 buildroot.go:174] setting up certificates
	I0917 02:12:32.753635    4110 provision.go:84] configureAuth start
	I0917 02:12:32.753642    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetMachineName
	I0917 02:12:32.753783    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:32.753886    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.753973    4110 provision.go:143] copyHostCerts
	I0917 02:12:32.754002    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:12:32.754055    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:12:32.754061    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:12:32.754199    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:12:32.754425    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:12:32.754455    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:12:32.754465    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:12:32.754535    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:12:32.754684    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:12:32.754713    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:12:32.754717    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:12:32.754781    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:12:32.754925    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m04 san=[127.0.0.1 192.169.0.8 ha-857000-m04 localhost minikube]
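
Server cert generation signs a fresh key with the shared minikube CA and bakes the SAN list from the line above into the certificate. A condensed crypto/x509 sketch of that shape; the field choices are illustrative, not provision.go's exact template:

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a server certificate signed by the given CA,
    // carrying the IP and DNS SANs from the log (127.0.0.1, 192.169.0.8,
    // ha-857000-m04, localhost, minikube). Returns DER bytes and the key.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, names []string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000-m04"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,   // 127.0.0.1, 192.169.0.8
            DNSNames:     names, // ha-857000-m04, localhost, minikube
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }
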
	I0917 02:12:32.886815    4110 provision.go:177] copyRemoteCerts
	I0917 02:12:32.886883    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:12:32.886900    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.887049    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.887156    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.887265    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.887345    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:32.921412    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:12:32.921483    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:12:32.942093    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:12:32.942165    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:12:32.962202    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:12:32.962278    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:12:32.982539    4110 provision.go:87] duration metric: took 228.892121ms to configureAuth
	I0917 02:12:32.982555    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:12:32.982734    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:32.982747    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:32.982882    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.982965    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.983053    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.983146    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.983222    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.983341    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:32.983471    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:32.983479    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:12:33.039112    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:12:33.039126    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:12:33.039209    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:12:33.039225    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:33.039356    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:33.039463    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.039553    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.039642    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:33.039765    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:33.039901    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:33.039948    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:12:33.105290    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:12:33.105311    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:33.105463    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:33.105557    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.105679    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.105803    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:33.106006    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:33.106166    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:33.106179    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:12:34.690044    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:12:34.690061    4110 machine.go:96] duration metric: took 15.363692529s to provisionDockerMachine
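
One detail worth noting in the unit file installed above: the cumulative Environment="NO_PROXY=..." lines, one per cluster node IP, where each later line carries the full list so far and systemd lets the last assignment of the variable win. A small sketch of how such lines can be generated (noProxyLines is a hypothetical helper, not minikube's template code):

    package dockerunit

    import (
        "fmt"
        "strings"
    )

    // noProxyLines reproduces the three cumulative Environment= lines in
    // the generated docker.service: one per IP, each with the list so far.
    func noProxyLines(ips []string) []string {
        lines := make([]string, 0, len(ips))
        for i := range ips {
            lines = append(lines, fmt.Sprintf("Environment=\"NO_PROXY=%s\"", strings.Join(ips[:i+1], ",")))
        }
        return lines
    }

Called with []string{"192.169.0.5", "192.169.0.6", "192.169.0.7"}, this yields exactly the three lines seen in the unit above.
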
	I0917 02:12:34.690069    4110 start.go:293] postStartSetup for "ha-857000-m04" (driver="hyperkit")
	I0917 02:12:34.690105    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:12:34.690128    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.690331    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:12:34.690344    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.690448    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.690550    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.690643    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.690734    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:34.729693    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:12:34.733386    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:12:34.733399    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:12:34.733491    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:12:34.733629    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:12:34.733635    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:12:34.733801    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:12:34.743555    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:12:34.777005    4110 start.go:296] duration metric: took 86.908647ms for postStartSetup
	I0917 02:12:34.777029    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.777213    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:12:34.777227    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.777324    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.777401    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.777484    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.777560    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:34.811015    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:12:34.811085    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:12:34.865249    4110 fix.go:56] duration metric: took 15.683145042s for fixHost
	I0917 02:12:34.865277    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.865435    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.865528    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.865626    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.865720    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.865866    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:34.866008    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:34.866017    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:12:34.922683    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564355.020144093
	
	I0917 02:12:34.922697    4110 fix.go:216] guest clock: 1726564355.020144093
	I0917 02:12:34.922703    4110 fix.go:229] Guest: 2024-09-17 02:12:35.020144093 -0700 PDT Remote: 2024-09-17 02:12:34.865267 -0700 PDT m=+127.793621612 (delta=154.877093ms)
	I0917 02:12:34.922714    4110 fix.go:200] guest clock delta is within tolerance: 154.877093ms
	I0917 02:12:34.922718    4110 start.go:83] releasing machines lock for "ha-857000-m04", held for 15.740632652s
	I0917 02:12:34.922744    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.922875    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:34.945234    4110 out.go:177] * Found network options:
	I0917 02:12:34.965134    4110 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0917 02:12:34.986412    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:12:34.986446    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:12:34.986459    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:12:34.986477    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.987363    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.987619    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.987838    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 02:12:34.987863    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:12:34.987882    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	W0917 02:12:34.987901    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:12:34.987917    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:12:34.988015    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:12:34.988040    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.988144    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.988241    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.988362    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.988430    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.988562    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.988636    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.988712    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:34.988798    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	W0917 02:12:35.089466    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:12:35.089538    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:12:35.103798    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:12:35.103814    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:12:35.103888    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:12:35.122855    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:12:35.131456    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:12:35.140120    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:12:35.140187    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:12:35.148614    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:12:35.156897    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:12:35.165192    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:12:35.173754    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:12:35.182471    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:12:35.191008    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:12:35.199448    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:12:35.207926    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:12:35.216411    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:12:35.228568    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:35.327014    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
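
The cgroupfs configuration above is a series of sed edits over /etc/containerd/config.toml; the key one flips SystemdCgroup to false. A Go equivalent of that single edit (illustrative, not minikube's cruntime code):

    package cruntime

    import (
        "os"
        "regexp"
    )

    // setCgroupfs rewrites any "SystemdCgroup = ..." line to false in a
    // containerd config.toml, preserving the line's indentation, the same
    // effect as the sed one-liner in the log.
    func setCgroupfs(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        return os.WriteFile(path, out, 0o644)
    }
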
	I0917 02:12:35.346549    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:12:35.346628    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:12:35.370011    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:12:35.382502    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:12:35.397499    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:12:35.408840    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:12:35.420206    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:12:35.442422    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:12:35.453508    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:12:35.468375    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:12:35.471279    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:12:35.479407    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:12:35.492955    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:12:35.593589    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:12:35.695477    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:12:35.695504    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:12:35.710594    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:35.826600    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:12:38.101010    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.274345081s)
	I0917 02:12:38.101138    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:12:38.113882    4110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:12:38.128373    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:12:38.140107    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:12:38.249684    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:12:38.361672    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:38.469978    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:12:38.489760    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:12:38.502395    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:38.604591    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:12:38.669590    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:12:38.669684    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:12:38.674420    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:12:38.674483    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:12:38.677707    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:12:38.702126    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:12:38.702225    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:12:38.719390    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:12:38.757457    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:12:38.799117    4110 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 02:12:38.819990    4110 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0917 02:12:38.841085    4110 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	I0917 02:12:38.862007    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:38.862240    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:12:38.865326    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:12:38.874823    4110 mustload.go:65] Loading cluster: ha-857000
	I0917 02:12:38.875009    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:38.875239    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:38.875265    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:38.884252    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52315
	I0917 02:12:38.884596    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:38.885007    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:38.885024    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:38.885217    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:38.885327    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:12:38.885411    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:38.885502    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:12:38.886472    4110 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:12:38.886740    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:38.886764    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:38.895399    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52317
	I0917 02:12:38.895752    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:38.896084    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:38.896095    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:38.896312    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:38.896445    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:12:38.896532    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.8
	I0917 02:12:38.896538    4110 certs.go:194] generating shared ca certs ...
	I0917 02:12:38.896550    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:12:38.896701    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:12:38.896754    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:12:38.896764    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:12:38.896789    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:12:38.896809    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:12:38.896826    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:12:38.896910    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:12:38.896963    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:12:38.896974    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:12:38.897008    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:12:38.897042    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:12:38.897070    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:12:38.897139    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:12:38.897176    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:12:38.897196    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:12:38.897214    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:38.897242    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:12:38.917488    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:12:38.937120    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:12:38.956856    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:12:38.976762    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:12:38.997198    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:12:39.018037    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:12:39.040033    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:12:39.044757    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:12:39.053844    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:12:39.057290    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:12:39.057337    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:12:39.061592    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:12:39.070092    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:12:39.078554    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:39.082016    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:39.082086    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:39.086282    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:12:39.094779    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:12:39.103890    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:12:39.107498    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:12:39.107551    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:12:39.111799    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
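The ".0" symlink names in the three linking steps above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is why each link is preceded by an `openssl x509 -hash` run: OpenSSL looks up CA certificates in /etc/ssl/certs by that hash. The same step by hand, for the minikubeCA certificate:

	# Derive the hash-named symlink exactly as the log does for minikubeCA.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"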
	I0917 02:12:39.120941    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:12:39.124549    4110 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 02:12:39.124586    4110 kubeadm.go:934] updating node {m04 192.169.0.8 0 v1.31.1 docker false true} ...
	I0917 02:12:39.124645    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:12:39.124713    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:12:39.132685    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:12:39.132752    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0917 02:12:39.140189    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 02:12:39.153737    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
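After these two transfers, the kubelet unit and the kubeadm drop-in carrying the per-node flags can be inspected on the node (paths taken from the two scp lines above):

	# Show the effective unit plus the drop-in with the node-specific flags.
	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf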
	I0917 02:12:39.167480    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:12:39.170335    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:12:39.180131    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:39.274978    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:39.290344    4110 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0917 02:12:39.290539    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:39.312606    4110 out.go:177] * Verifying Kubernetes components...
	I0917 02:12:39.332523    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:39.447567    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:39.466307    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:12:39.466524    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 02:12:39.466571    4110 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 02:12:39.467449    4110 node_ready.go:35] waiting up to 6m0s for node "ha-857000-m04" to be "Ready" ...
	I0917 02:12:39.467568    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:12:39.467575    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.467585    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.467591    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.470632    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:39.969561    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:12:39.969576    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.969585    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.969590    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.972203    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:39.972562    4110 node_ready.go:49] node "ha-857000-m04" has status "Ready":"True"
	I0917 02:12:39.972573    4110 node_ready.go:38] duration metric: took 505.091961ms for node "ha-857000-m04" to be "Ready" ...
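The node poll that just completed is a plain GET against the node object; an equivalent check by hand, assuming kubectl pointed at the kubeconfig loaded above:

	# Equivalent readiness probe (assumes kubectl; kubeconfig path from the log).
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19648-1025/kubeconfig \
	  get node ha-857000-m04 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'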
	I0917 02:12:39.972579    4110 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:12:39.972614    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:39.972619    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.972625    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.972629    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.976988    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:39.982728    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:39.982773    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:39.982778    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.982795    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.982801    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.985018    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:39.985518    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:39.985526    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.985532    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.985536    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.987300    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:40.482877    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:40.482889    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.482894    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.482898    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.485392    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:40.485952    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:40.485960    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.485965    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.485972    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.487726    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:40.984290    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:40.984330    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.984337    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.984340    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.986636    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:40.987126    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:40.987134    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.987140    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.987144    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.989077    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:41.483798    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:41.483813    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.483838    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.483842    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.485913    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:41.486349    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:41.486357    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.486363    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.486366    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.487997    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:41.984399    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:41.984423    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.984436    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.984441    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.987692    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:41.988563    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:41.988571    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.988576    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.988580    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.990387    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:41.990837    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:42.483597    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:42.483651    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.483720    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.483731    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.486451    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:42.487002    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:42.487009    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.487015    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.487019    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.488735    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:42.984178    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:42.984202    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.984244    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.984250    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.987573    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:42.988040    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:42.988049    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.988056    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.988060    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.989664    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:43.484870    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:43.484884    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.484891    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.484894    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.487141    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:43.487687    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:43.487695    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.487701    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.487705    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.489384    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:43.985004    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:43.985028    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.985040    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.985047    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.988376    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:43.989251    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:43.989258    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.989264    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.989274    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.991010    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:43.991366    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:44.483323    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:44.483341    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.483350    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.483355    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.486151    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:44.486714    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:44.486722    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.486727    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.486732    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.488452    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:44.984530    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:44.984557    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.984569    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.984574    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.987518    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:44.988156    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:44.988163    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.988169    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.988173    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.989906    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:45.484413    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:45.484429    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.484436    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.484438    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.486664    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:45.487158    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:45.487166    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.487172    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.487180    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.488811    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:45.983568    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:45.983588    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.983597    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.983601    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.986094    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:45.986663    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:45.986670    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.986676    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.986681    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.988390    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:46.484237    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:46.484252    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.484258    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.484262    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.486548    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:46.487112    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:46.487120    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.487126    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.487130    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.488764    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:46.489074    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:46.984666    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:46.984685    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.984693    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.984699    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.987277    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:46.987747    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:46.987754    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.987760    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.987764    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.989871    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:47.483189    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:47.483204    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.483220    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.483225    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.485536    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:47.486040    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:47.486048    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.486053    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.486077    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.487968    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:47.983218    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:47.983261    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.983271    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.983276    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.985959    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:47.986467    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:47.986476    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.986480    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.986483    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.988256    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:48.483839    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:48.483855    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.483877    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.483881    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.486127    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:48.486742    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:48.486750    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.486756    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.486763    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.488482    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:48.983104    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:48.983116    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.983123    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.983126    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.986541    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:48.986974    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:48.986982    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.986988    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.987000    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.989572    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:48.989840    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:49.483113    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:49.483127    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.483135    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.483138    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.485418    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:49.485944    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:49.485952    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.485958    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.485965    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.488051    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:49.983392    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:49.983418    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.983430    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.983435    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.990100    4110 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 02:12:49.990521    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:49.990528    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.990534    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.990551    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.995841    4110 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:12:50.484489    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:50.484507    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.484516    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.484519    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.487282    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:50.487803    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:50.487815    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.487821    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.487826    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.489538    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:50.984752    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:50.984776    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.984788    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.984796    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.988059    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:50.988580    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:50.988587    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.988593    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.988597    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.990162    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:50.990537    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:51.483827    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:51.483847    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.483864    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.483902    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.487323    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:51.487924    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:51.487932    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.487937    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.487942    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.489844    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:51.983451    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:51.983470    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.983482    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.983488    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.986994    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:51.987525    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:51.987535    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.987543    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.987548    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.989115    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:52.483263    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:52.483288    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.483325    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.483332    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.486347    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:52.486988    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:52.486995    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.487001    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.487005    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.488688    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:52.983765    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:52.983790    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.983801    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.983810    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.986675    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:52.987089    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:52.987119    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.987125    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.987129    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.988627    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:53.484927    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:53.484941    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.484948    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.484951    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.487216    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:53.487660    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:53.487667    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.487673    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.487676    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.489219    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:53.489560    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:53.984242    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:53.984261    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.984274    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.984280    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.986802    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:53.987318    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:53.987326    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.987333    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.987336    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.989152    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:54.483277    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:54.483309    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.483353    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.483368    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.486304    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:54.486703    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:54.486709    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.486715    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.486718    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.488409    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:54.984401    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:54.984421    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.984432    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.984436    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.987150    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:54.987731    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:54.987739    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.987745    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.987762    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.990093    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:55.484219    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:55.484245    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.484263    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.484270    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.487478    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:55.488038    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:55.488046    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.488052    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.488055    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.489736    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:55.490063    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:55.983721    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:55.983738    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.983747    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.983751    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.986467    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:55.986910    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:55.986918    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.986924    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.986927    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.988668    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:56.483680    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:56.483698    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.483705    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.483708    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.486006    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:56.486509    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:56.486517    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.486523    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.486526    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.488267    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:56.984953    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:56.984979    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.984991    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.984998    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.988958    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:56.989556    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:56.989567    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.989575    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.989580    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.991555    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.483204    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:57.483220    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.483244    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.483257    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.489651    4110 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 02:12:57.491669    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.491685    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.491693    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.491697    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.500745    4110 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0917 02:12:57.502366    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.502386    4110 pod_ready.go:82] duration metric: took 17.519343583s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
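The 17.5s coredns wait that just finished follows the same pattern at pod level; a hand-run equivalent of one poll iteration, under the same kubectl/kubeconfig assumption as the node check above:

	# Same jsonpath idea as the node probe, against the coredns pod.
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19648-1025/kubeconfig \
	  -n kube-system get pod coredns-7c65d6cfc9-fg65r \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'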
	I0917 02:12:57.502398    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.502483    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nl5j5
	I0917 02:12:57.502497    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.502507    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.502512    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.512509    4110 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0917 02:12:57.513793    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.513807    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.513817    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.513823    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.522244    4110 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 02:12:57.522585    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.522595    4110 pod_ready.go:82] duration metric: took 20.190892ms for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.522609    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.522650    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000
	I0917 02:12:57.522656    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.522662    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.522666    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.527526    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:57.528075    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.528084    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.528089    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.528100    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.530647    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:57.531009    4110 pod_ready.go:93] pod "etcd-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.531019    4110 pod_ready.go:82] duration metric: took 8.403704ms for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.531025    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.531068    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m02
	I0917 02:12:57.531073    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.531082    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.531087    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.533324    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:57.533687    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:57.533694    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.533700    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.533704    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.535601    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.535875    4110 pod_ready.go:93] pod "etcd-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.535883    4110 pod_ready.go:82] duration metric: took 4.853562ms for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.535902    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.535944    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:12:57.535950    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.535956    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.535960    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.537587    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.537964    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:57.537972    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.537978    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.537982    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.539462    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.539797    4110 pod_ready.go:93] pod "etcd-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.539805    4110 pod_ready.go:82] duration metric: took 3.894392ms for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.539816    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.684040    4110 request.go:632] Waited for 144.185674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:57.684081    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:57.684104    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.684125    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.684132    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.686547    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:57.883303    4110 request.go:632] Waited for 196.17665ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.883378    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.883388    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.883398    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.883406    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.886942    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:57.887555    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.887569    4110 pod_ready.go:82] duration metric: took 347.737487ms for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
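	The repeated "Waited for ... due to client-side throttling" lines above come from client-go's client-side rate limiter, not from API Priority and Fairness: with the default QPS=5 / Burst=10, once the burst is spent each additional request is delayed about 200ms, which matches the ~195-200ms gaps logged here. A minimal sketch of raising that limit, assuming client-go and a kubeconfig path (illustrative names, not minikube's code):

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newFastClient builds a clientset whose client-side limiter will not
	// introduce the ~200ms waits seen in this log (defaults: QPS=5, Burst=10).
	func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}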
	I0917 02:12:57.887576    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.083903    4110 request.go:632] Waited for 196.258589ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:58.084076    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:58.084095    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.084104    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.084111    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.087323    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:58.284752    4110 request.go:632] Waited for 196.829301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:58.284841    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:58.284851    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.284863    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.284871    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.287836    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:58.288234    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:58.288243    4110 pod_ready.go:82] duration metric: took 400.655079ms for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.288251    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.484581    4110 request.go:632] Waited for 196.285151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:58.484627    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:58.484634    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.484670    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.484676    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.487401    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:58.683590    4110 request.go:632] Waited for 195.669934ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:58.683635    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:58.683643    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.683695    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.683709    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.687024    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:58.687397    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:58.687407    4110 pod_ready.go:82] duration metric: took 399.144074ms for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.687414    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.884795    4110 request.go:632] Waited for 197.34012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:12:58.884845    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:12:58.884854    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.884862    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.884886    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.887327    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:59.083807    4110 request.go:632] Waited for 195.949253ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:59.083945    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:59.083961    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.083973    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.083980    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.087431    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:59.087851    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:59.087864    4110 pod_ready.go:82] duration metric: took 400.438219ms for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.087874    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.283487    4110 request.go:632] Waited for 195.551174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:12:59.283570    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:12:59.283587    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.283598    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.283604    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.286668    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:59.483240    4110 request.go:632] Waited for 196.050684ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:59.483272    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:59.483277    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.483284    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.483287    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.485481    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:59.485790    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:59.485799    4110 pod_ready.go:82] duration metric: took 397.912163ms for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.485808    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.684196    4110 request.go:632] Waited for 198.346846ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:59.684283    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:59.684289    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.684295    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.684299    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.686349    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:59.883921    4110 request.go:632] Waited for 197.130794ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:59.883972    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:59.883980    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.884030    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.884039    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.888316    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:59.888770    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:59.888788    4110 pod_ready.go:82] duration metric: took 402.964156ms for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.888815    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.083631    4110 request.go:632] Waited for 194.730555ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:13:00.083713    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:13:00.083720    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.083728    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.083732    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.086353    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:00.285261    4110 request.go:632] Waited for 198.400376ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:13:00.285348    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:13:00.285356    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.285364    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.285370    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.287853    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:00.288149    4110 pod_ready.go:93] pod "kube-proxy-528ht" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:00.288159    4110 pod_ready.go:82] duration metric: took 399.322905ms for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.288167    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.484621    4110 request.go:632] Waited for 196.39101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:13:00.484716    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:13:00.484727    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.484737    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.484744    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.488045    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:00.685321    4110 request.go:632] Waited for 196.686181ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:00.685381    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:00.685438    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.685455    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.685464    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.688919    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:00.689362    4110 pod_ready.go:93] pod "kube-proxy-g9wxm" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:00.689374    4110 pod_ready.go:82] duration metric: took 401.194339ms for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.689383    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.884950    4110 request.go:632] Waited for 195.521785ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:13:00.884994    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:13:00.885018    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.885025    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.885034    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.887231    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:01.084761    4110 request.go:632] Waited for 197.012037ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.084795    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.084800    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.084806    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.084810    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.088892    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:13:01.089243    4110 pod_ready.go:93] pod "kube-proxy-vskbj" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:01.089253    4110 pod_ready.go:82] duration metric: took 399.857039ms for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.089261    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.284602    4110 request.go:632] Waited for 195.290385ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:13:01.284640    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:13:01.284645    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.284672    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.284680    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.286636    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:13:01.483312    4110 request.go:632] Waited for 196.269648ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:01.483391    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:01.483403    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.483413    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.483434    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.486551    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:01.486934    4110 pod_ready.go:93] pod "kube-proxy-zrqvr" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:01.486943    4110 pod_ready.go:82] duration metric: took 397.670619ms for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.486950    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.683659    4110 request.go:632] Waited for 196.646108ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:13:01.683796    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:13:01.683807    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.683819    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.683825    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.686996    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:01.884224    4110 request.go:632] Waited for 196.55945ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.884363    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.884374    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.884385    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.884393    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.888135    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:01.888538    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:01.888551    4110 pod_ready.go:82] duration metric: took 401.588084ms for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.888559    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.083387    4110 request.go:632] Waited for 194.732026ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:13:02.083482    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:13:02.083493    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.083503    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.083512    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.087127    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:02.284704    4110 request.go:632] Waited for 197.205174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:02.284756    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:02.284761    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.284768    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.284773    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.287752    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:02.288038    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:02.288049    4110 pod_ready.go:82] duration metric: took 399.476957ms for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.288056    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.485154    4110 request.go:632] Waited for 197.02881ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:13:02.485191    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:13:02.485198    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.485206    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.485211    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.487672    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:02.685336    4110 request.go:632] Waited for 197.331043ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:02.685388    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:02.685397    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.685411    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.685417    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.688565    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:02.688910    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:02.688918    4110 pod_ready.go:82] duration metric: took 400.85077ms for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.688929    4110 pod_ready.go:39] duration metric: took 22.715951136s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
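	Each pod_ready wait above polls a single pod's Ready condition until it is True or the 6m0s budget runs out. A sketch of an equivalent loop with client-go, under stated assumptions (illustrative only, not minikube's pod_ready.go; the pod and namespace names are taken from this log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same 6m0s budget as the waits in this log; transient API errors
		// keep the poll going instead of failing it.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-857000", metav1.GetOptions{})
				if err != nil {
					return false, nil // retry, mirroring tolerant polling across apiserver restarts
				}
				return podReady(pod), nil
			})
		fmt.Println("etcd-ha-857000 ready:", err == nil)
	}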
	I0917 02:13:02.688942    4110 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:13:02.689000    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:13:02.699631    4110 system_svc.go:56] duration metric: took 10.684367ms WaitForService to wait for kubelet
	I0917 02:13:02.699646    4110 kubeadm.go:582] duration metric: took 23.408872965s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
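	The kubelet check above shells out to systemctl; with --quiet nothing is printed and the result is carried entirely by the exit code (0 = active). A local-exec stand-in, hedged: minikube runs this command through its SSH runner inside the VM, not via os/exec on the host.

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Exit code 0 means the kubelet unit is active; --quiet suppresses output.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubelet is not active: %v", err)
		}
		log.Println("kubelet is active")
	}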
	I0917 02:13:02.699663    4110 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:13:02.884773    4110 request.go:632] Waited for 185.024169ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 02:13:02.884858    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 02:13:02.884867    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.884878    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.884887    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.888704    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:02.889505    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889516    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889528    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889534    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889537    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889540    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889543    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889545    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889549    4110 node_conditions.go:105] duration metric: took 189.878189ms to run NodePressure ...
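	The NodePressure verification reads capacity from GET /api/v1/nodes; the four repeated cpu/ephemeral-storage pairs above are the four nodes of the ha-857000 cluster. A hedged client-go sketch of the same read (a fragment that reuses a clientset built as in the earlier sketches; the helper name is invented):

	// printNodeCapacity prints the same per-node capacity fields the log shows.
	// Assumes the imports used above plus "fmt".
	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
				n.Name,
				n.Status.Capacity.Cpu(),              // "2" in this log
				n.Status.Capacity.StorageEphemeral()) // "17734596Ki" in this log
		}
		return nil
	}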
	I0917 02:13:02.889557    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:13:02.889572    4110 start.go:255] writing updated cluster config ...
	I0917 02:13:02.889954    4110 ssh_runner.go:195] Run: rm -f paused
	I0917 02:13:02.930446    4110 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0917 02:13:02.983109    4110 out.go:201] 
	W0917 02:13:03.020673    4110 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0917 02:13:03.057789    4110 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0917 02:13:03.135680    4110 out.go:177] * Done! kubectl is now configured to use "ha-857000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 17 09:12:18 ha-857000 cri-dockerd[1413]: time="2024-09-17T09:12:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc1d198ffe0b277108e5042e64604ceb887f7f17cd5814893fbf789f1ce180f0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316039322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316201907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316216597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316284213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356401685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356591613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356646706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356901392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358210462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358271414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358284287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358347315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361819988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361879924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361892293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361954784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:48 ha-857000 dockerd[1160]: time="2024-09-17T09:12:48.289404793Z" level=info msg="ignoring event" container=67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:12:48 ha-857000 dockerd[1166]: time="2024-09-17T09:12:48.290629069Z" level=info msg="shim disconnected" id=67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7 namespace=moby
	Sep 17 09:12:48 ha-857000 dockerd[1166]: time="2024-09-17T09:12:48.290966877Z" level=warning msg="cleaning up after shim disconnected" id=67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7 namespace=moby
	Sep 17 09:12:48 ha-857000 dockerd[1166]: time="2024-09-17T09:12:48.291008241Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269678049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269745426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269758363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269841312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d940d576a500a       6e38f40d628db                                                                                         4 seconds ago        Running             storage-provisioner       2                   6fb8068a5c29f       storage-provisioner
	119f2deb32f13       8c811b4aec35f                                                                                         46 seconds ago       Running             busybox                   1                   fc1d198ffe0b2       busybox-7dff88458-4jzg8
	b7aa83ae3a822       c69fa2e9cbf5f                                                                                         46 seconds ago       Running             coredns                   1                   f4e7a7b3c65e5       coredns-7c65d6cfc9-nl5j5
	c37a677e31180       60c005f310ff3                                                                                         46 seconds ago       Running             kube-proxy                1                   5294422217d99       kube-proxy-vskbj
	3d889c7c8da7e       12968670680f4                                                                                         46 seconds ago       Running             kindnet-cni               1                   80326e6e99372       kindnet-7pf7v
	7b8b62bf7340c       c69fa2e9cbf5f                                                                                         46 seconds ago       Running             coredns                   1                   f4cf87ea66207       coredns-7c65d6cfc9-fg65r
	67814a4514b10       6e38f40d628db                                                                                         47 seconds ago       Exited              storage-provisioner       1                   6fb8068a5c29f       storage-provisioner
	ca7fe8ccd4c53       175ffd71cce3d                                                                                         About a minute ago   Running             kube-controller-manager   6                   77f536a07a3a6       kube-controller-manager-ha-857000
	475dedee37228       6bab7719df100                                                                                         About a minute ago   Running             kube-apiserver            6                   0968090389d54       kube-apiserver-ha-857000
	37d6d6479e30b       38af8ddebf499                                                                                         2 minutes ago        Running             kube-vip                  1                   2842ed202c474       kube-vip-ha-857000
	00ff29c213716       9aa1fad941575                                                                                         2 minutes ago        Running             kube-scheduler            2                   309841a63d772       kube-scheduler-ha-857000
	13b7f8a93ad49       175ffd71cce3d                                                                                         2 minutes ago        Exited              kube-controller-manager   5                   77f536a07a3a6       kube-controller-manager-ha-857000
	8c0804e78de8f       2e96e5913fc06                                                                                         2 minutes ago        Running             etcd                      2                   6cfb11ed1d6ba       etcd-ha-857000
	a18a6b023cd60       6bab7719df100                                                                                         2 minutes ago        Exited              kube-apiserver            5                   0968090389d54       kube-apiserver-ha-857000
	034279696db8f       38af8ddebf499                                                                                         6 minutes ago        Exited              kube-vip                  0                   4205e70bfa1bb       kube-vip-ha-857000
	d9fae1497b048       9aa1fad941575                                                                                         6 minutes ago        Exited              kube-scheduler            1                   37d9fe68f2e59       kube-scheduler-ha-857000
	f4f59b8c76404       2e96e5913fc06                                                                                         6 minutes ago        Exited              etcd                      1                   a23094a650513       etcd-ha-857000
	fe908ac73b00f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   9 minutes ago        Exited              busybox                   0                   80864159ef38e       busybox-7dff88458-4jzg8
	521527f17691c       c69fa2e9cbf5f                                                                                         11 minutes ago       Exited              coredns                   0                   aa21641a5b16e       coredns-7c65d6cfc9-nl5j5
	f991c8e956d90       c69fa2e9cbf5f                                                                                         11 minutes ago       Exited              coredns                   0                   da08087b51cd9       coredns-7c65d6cfc9-fg65r
	5d84a01abd3e7       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              12 minutes ago       Exited              kindnet-cni               0                   38db6fab73655       kindnet-7pf7v
	0b03e5e488939       60c005f310ff3                                                                                         12 minutes ago       Exited              kube-proxy                0                   067bc1b2ad7fa       kube-proxy-vskbj
	
	
	==> coredns [521527f17691] <==
	[INFO] 10.244.2.2:33230 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100028s
	[INFO] 10.244.2.2:37727 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077614s
	[INFO] 10.244.2.2:51233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090375s
	[INFO] 10.244.1.2:43082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115984s
	[INFO] 10.244.1.2:45048 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000071244s
	[INFO] 10.244.1.2:48877 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106601s
	[INFO] 10.244.1.2:59235 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000068348s
	[INFO] 10.244.1.2:53808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064222s
	[INFO] 10.244.1.2:54982 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064992s
	[INFO] 10.244.0.4:59177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012236s
	[INFO] 10.244.0.4:59468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096608s
	[INFO] 10.244.0.4:49953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108018s
	[INFO] 10.244.2.2:36658 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081427s
	[INFO] 10.244.1.2:53166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140458s
	[INFO] 10.244.1.2:60442 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069729s
	[INFO] 10.244.0.4:60564 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007076s
	[INFO] 10.244.0.4:57696 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000125726s
	[INFO] 10.244.2.2:33447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114855s
	[INFO] 10.244.2.2:49647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000058138s
	[INFO] 10.244.2.2:55869 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00009725s
	[INFO] 10.244.1.2:49826 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096631s
	[INFO] 10.244.1.2:33376 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000046366s
	[INFO] 10.244.1.2:47184 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7b8b62bf7340] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40424 - 46793 "HINFO IN 2652948645074262826.4033840954787183129. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019948501s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[345670875]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.718) (total time: 30000ms):
	Trace[345670875]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (09:12:48.718)
	Trace[345670875]: [30.000647992s] [30.000647992s] END
	[INFO] plugin/kubernetes: Trace[990255223]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.716) (total time: 30002ms):
	Trace[990255223]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (09:12:48.718)
	Trace[990255223]: [30.002122547s] [30.002122547s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1561533284]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.716) (total time: 30004ms):
	Trace[1561533284]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (09:12:48.720)
	Trace[1561533284]: [30.004471134s] [30.004471134s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
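	The traces above are the kubernetes plugin's client-go reflector doing its initial lists: each is a GET with limit=500&resourceVersion=0 against the Service VIP 10.96.0.1:443, and while the apiserver is restarting it fails with an i/o timeout after roughly 30 seconds. A hedged fragment showing the equivalent call (cfg is a *rest.Config as in the earlier sketch, and the 30s timeout here is an assumption chosen to match the trace durations, not CoreDNS's actual configuration):

	// Set the client timeout before building the clientset.
	cfg.Timeout = 30 * time.Second
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The reflector's initial list, matching the logged URL
	// /api/v1/namespaces?limit=500&resourceVersion=0. With the apiserver down
	// this returns "dial tcp 10.96.0.1:443: i/o timeout" once the timeout elapses.
	_, err = cs.CoreV1().Namespaces().List(context.Background(), metav1.ListOptions{
		Limit:           500,
		ResourceVersion: "0",
	})
	if err != nil {
		log.Printf("list namespaces: %v", err)
	}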
	
	
	==> coredns [b7aa83ae3a82] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48468 - 41934 "HINFO IN 5248560894606224369.8303849678443807322. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019682687s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[134011415]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.720) (total time: 30000ms):
	Trace[134011415]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (09:12:48.721)
	Trace[134011415]: [30.000772699s] [30.000772699s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1931337556]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.720) (total time: 30001ms):
	Trace[1931337556]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (09:12:48.721)
	Trace[1931337556]: [30.001621273s] [30.001621273s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2093896532]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.720) (total time: 30001ms):
	Trace[2093896532]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (09:12:48.721)
	Trace[2093896532]: [30.001436763s] [30.001436763s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [f991c8e956d9] <==
	[INFO] 10.244.1.2:36169 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206963s
	[INFO] 10.244.1.2:33814 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000088589s
	[INFO] 10.244.1.2:57385 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000535008s
	[INFO] 10.244.0.4:54856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135529s
	[INFO] 10.244.0.4:47831 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.019088159s
	[INFO] 10.244.0.4:46325 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201714s
	[INFO] 10.244.0.4:45239 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000255383s
	[INFO] 10.244.0.4:55042 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000141827s
	[INFO] 10.244.2.2:47888 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108401s
	[INFO] 10.244.2.2:41486 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00044994s
	[INFO] 10.244.2.2:50623 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082841s
	[INFO] 10.244.1.2:54143 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098038s
	[INFO] 10.244.1.2:38802 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046632s
	[INFO] 10.244.0.4:39532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.002579505s
	[INFO] 10.244.2.2:53978 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077749s
	[INFO] 10.244.2.2:60710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092889s
	[INFO] 10.244.2.2:51255 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044117s
	[INFO] 10.244.1.2:36996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056219s
	[INFO] 10.244.1.2:39487 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090704s
	[INFO] 10.244.0.4:39831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131192s
	[INFO] 10.244.0.4:35770 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154922s
	[INFO] 10.244.2.2:45820 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113973s
	[INFO] 10.244.1.2:44519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120184s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-857000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T02_00_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:00:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:13:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:00:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:00:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:00:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-857000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 54854ca4cf93431694d9ad27a68ef89d
	  System UUID:                f6fb40b6-0000-0000-91c0-dbf4ea1b682c
	  Boot ID:                    a1af0517-f4c2-4eae-96db-f7479d049a6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4jzg8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 coredns-7c65d6cfc9-fg65r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-nl5j5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-857000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-7pf7v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-857000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-857000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vskbj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-857000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-857000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 46s                    kube-proxy       
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                    kubelet          Node ha-857000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                    kubelet          Node ha-857000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                    kubelet          Node ha-857000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  NodeReady                11m                    kubelet          Node ha-857000 status is now: NodeReady
	  Normal  RegisteredNode           11m                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           9m53s                  node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           7m44s                  node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node ha-857000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node ha-857000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s (x7 over 2m20s)  kubelet          Node ha-857000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           86s                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           85s                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           57s                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	
	
	Name:               ha-857000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_01_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:01:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:13:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:11:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-857000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 39fe1ffb0a9e4afb9fa3c09c6b13fed7
	  System UUID:                19404b28-0000-0000-842d-d4858a62cbd3
	  Boot ID:                    625329b0-bed9-4da5-90fd-2859c5b852dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mhjf6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 etcd-ha-857000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-vh2h2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-857000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-857000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-zrqvr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-857000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-857000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 82s                kube-proxy       
	  Normal   Starting                 7m48s              kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-857000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-857000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-857000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           9m53s              node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   Starting                 7m52s              kubelet          Starting kubelet.
	  Warning  Rebooted                 7m52s              kubelet          Node ha-857000-m02 has been rebooted, boot id: b4c87c19-d878-45a1-b0c5-442ae4d2861b
	  Normal   NodeHasSufficientPID     7m52s              kubelet          Node ha-857000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m52s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m52s              kubelet          Node ha-857000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m52s              kubelet          Node ha-857000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           7m44s              node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   Starting                 98s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  98s (x8 over 98s)  kubelet          Node ha-857000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    98s (x8 over 98s)  kubelet          Node ha-857000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     98s (x7 over 98s)  kubelet          Node ha-857000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           86s                node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           85s                node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           57s                node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	
	
	Name:               ha-857000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_03_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:03:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:13:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-857000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 69dae176c7914316a8660d135e30666c
	  System UUID:                3d8f47ea-0000-0000-a80b-a24a99cad96e
	  Boot ID:                    e1620cd8-3a62-4426-a27e-eeaf7b39756d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5x9l8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 etcd-ha-857000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m59s
	  kube-system                 kindnet-vc6z5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-857000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-857000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-g9wxm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-857000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-857000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 61s                kube-proxy       
	  Normal   Starting                 9m56s              kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-857000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-857000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-857000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m59s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           9m58s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           9m53s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           7m44s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           86s                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           85s                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   Starting                 65s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  64s                kubelet          Node ha-857000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s                kubelet          Node ha-857000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s                kubelet          Node ha-857000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 64s                kubelet          Node ha-857000-m03 has been rebooted, boot id: e1620cd8-3a62-4426-a27e-eeaf7b39756d
	  Normal   RegisteredNode           57s                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	
	
	Name:               ha-857000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_04_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:04:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:12:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-857000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 15c3f15f82fe4af0a76f2083dcf53a13
	  System UUID:                32bc423b-0000-0000-90a4-5417ea5ec912
	  Boot ID:                    cd10fc3d-989b-457a-8925-881b38fed37e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4jk9v       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m
	  kube-system                 kube-proxy-528ht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 8m53s              kube-proxy       
	  Normal   Starting                 24s                kube-proxy       
	  Normal   NodeHasSufficientMemory  9m (x2 over 9m)    kubelet          Node ha-857000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m (x2 over 9m)    kubelet          Node ha-857000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m (x2 over 9m)    kubelet          Node ha-857000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           8m59s              node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           8m58s              node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           8m58s              node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   NodeReady                8m37s              kubelet          Node ha-857000-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m44s              node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           86s                node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           85s                node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           57s                node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   NodeNotReady             46s                node-controller  Node ha-857000-m04 status is now: NodeNotReady
	  Normal   Starting                 26s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26s (x3 over 26s)  kubelet          Node ha-857000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26s (x3 over 26s)  kubelet          Node ha-857000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26s (x3 over 26s)  kubelet          Node ha-857000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 26s (x2 over 26s)  kubelet          Node ha-857000-m04 has been rebooted, boot id: cd10fc3d-989b-457a-8925-881b38fed37e
	  Normal   NodeReady                26s (x2 over 26s)  kubelet          Node ha-857000-m04 status is now: NodeReady
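For reference, the four node summaries above are `kubectl describe nodes` output captured by the test harness. Assuming the ha-857000 profile's kubectl context is still present (minikube normally names the context after the profile), the same readiness picture can be re-checked with a shorter query, for example:

	$ kubectl --context ha-857000 get nodes -o wide
	$ kubectl --context ha-857000 describe node ha-857000-m04 | grep -A 6 'Conditions:'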
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035828] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007970] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.690889] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006763] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.660573] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.226234] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.530337] systemd-fstab-generator[461]: Ignoring "noauto" option for root device
	[  +0.102427] systemd-fstab-generator[473]: Ignoring "noauto" option for root device
	[  +1.905407] systemd-fstab-generator[1088]: Ignoring "noauto" option for root device
	[  +0.264183] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +0.055811] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.051134] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +0.114709] systemd-fstab-generator[1152]: Ignoring "noauto" option for root device
	[  +2.420834] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.093862] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	[  +0.101457] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.112591] systemd-fstab-generator[1405]: Ignoring "noauto" option for root device
	[  +0.460313] systemd-fstab-generator[1565]: Ignoring "noauto" option for root device
	[  +6.769000] kauditd_printk_skb: 212 callbacks suppressed
	[Sep17 09:11] kauditd_printk_skb: 40 callbacks suppressed
	[Sep17 09:12] kauditd_printk_skb: 78 callbacks suppressed
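The ACPI, RETBleed, and NFSD warnings above are typical of the Buildroot guest kernel that minikube boots under hyperkit and are unlikely to be related to this failure. A fresh copy of the ring buffer can be pulled from the VM (sketch; `-p` selects the profile):

	$ minikube -p ha-857000 ssh -- dmesg | tail -n 30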
	
	
	==> etcd [8c0804e78de8] <==
	{"level":"warn","ts":"2024-09-17T09:11:52.636053Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T09:11:52.642983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T09:11:52.663148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T09:11:52.743398Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T09:11:52.834019Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"4843c5334ac100b7","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:11:52.834231Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4843c5334ac100b7","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:11:56.836371Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"4843c5334ac100b7","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:11:56.836465Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4843c5334ac100b7","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:11:57.474154Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:11:57.474326Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:12:00.837987Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"4843c5334ac100b7","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:12:00.838171Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4843c5334ac100b7","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:12:02.474909Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:12:02.474924Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-17T09:12:02.527934Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:12:02.528179Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:12:02.553614Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:12:02.656074Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"4843c5334ac100b7","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-17T09:12:02.656117Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:12:02.671567Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"4843c5334ac100b7","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-17T09:12:02.671803Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:12:03.645158Z","caller":"traceutil/trace.go:171","msg":"trace[1621339428] linearizableReadLoop","detail":"{readStateIndex:2219; appliedIndex:2219; }","duration":"123.347982ms","start":"2024-09-17T09:12:03.521794Z","end":"2024-09-17T09:12:03.645142Z","steps":["trace[1621339428] 'read index received'  (duration: 123.341929ms)","trace[1621339428] 'applied index is now lower than readState.Index'  (duration: 4.903µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T09:12:03.645527Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.681467ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-g9wxm\" ","response":"range_response_count:1 size:5191"}
	{"level":"info","ts":"2024-09-17T09:12:03.645594Z","caller":"traceutil/trace.go:171","msg":"trace[2012729741] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-g9wxm; range_end:; response_count:1; response_revision:1897; }","duration":"123.79703ms","start":"2024-09-17T09:12:03.521791Z","end":"2024-09-17T09:12:03.645588Z","steps":["trace[2012729741] 'agreement among raft nodes before linearized reading'  (duration: 123.482937ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T09:12:03.647767Z","caller":"traceutil/trace.go:171","msg":"trace[1450641964] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1898; }","duration":"121.988853ms","start":"2024-09-17T09:12:03.525765Z","end":"2024-09-17T09:12:03.647754Z","steps":["trace[1450641964] 'process raft request'  (duration: 121.923204ms)"],"step_count":1}
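The warnings above show the peer at 192.169.0.7 (ha-857000-m03, member id 4843c5334ac100b7) refusing connections until roughly 09:12:02, when the Raft streams are re-established and only slow-apply traces remain. One way to confirm member health from inside the control-plane VM is etcdctl; the endpoint is standard, while the certificate paths below are the usual minikube locations and may differ:

	$ minikube -p ha-857000 ssh
	$ sudo ETCDCTL_API=3 etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health --cluster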
	
	
	==> etcd [f4f59b8c7640] <==
	{"level":"info","ts":"2024-09-17T09:10:21.875702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:10:23.692511Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275704,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:10:24.194017Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275704,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:10:24.278276Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:10:24.278324Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:10:24.301258Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-17T09:10:24.301488Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"info","ts":"2024-09-17T09:10:24.470887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.470976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.470995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.471013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.471022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:10:24.694867Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275704,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:10:24.938557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.746471868s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T09:10:24.938607Z","caller":"traceutil/trace.go:171","msg":"trace[802347161] range","detail":"{range_begin:; range_end:; }","duration":"1.746534049s","start":"2024-09-17T09:10:23.192066Z","end":"2024-09-17T09:10:24.938600Z","steps":["trace[802347161] 'agreement among raft nodes before linearized reading'  (duration: 1.746469617s)"],"step_count":1}
	{"level":"error","ts":"2024-09-17T09:10:24.938646Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: context canceled\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
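This earlier etcd container never got past pre-vote: with both peers (192.169.0.6 and 192.169.0.7) unreachable, a three-member cluster has no quorum, so linearizable reads stall and the /readyz probe returns 503, which is exactly the failure chain in the last four lines. The probe can be replayed by hand; this is a sketch that assumes etcd accepts the server keypair as a client certificate (the paths are the same assumed minikube locations as above):

	$ curl -s --cacert /var/lib/minikube/certs/etcd/ca.crt \
	    --cert /var/lib/minikube/certs/etcd/server.crt \
	    --key /var/lib/minikube/certs/etcd/server.key \
	    https://127.0.0.1:2379/readyz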
	
	
	==> kernel <==
	 09:13:06 up 2 min,  0 users,  load average: 1.09, 0.42, 0.15
	Linux ha-857000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3d889c7c8da7] <==
	I0917 09:12:29.606417       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.7 Flags: [] Table: 0} 
	I0917 09:12:39.612429       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:12:39.612507       1 main.go:299] handling current node
	I0917 09:12:39.612519       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:12:39.612524       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:12:39.612810       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:12:39.612842       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:12:39.612912       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:12:39.612978       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:12:49.606629       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:12:49.606712       1 main.go:299] handling current node
	I0917 09:12:49.606742       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:12:49.606793       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:12:49.606920       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:12:49.606967       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:12:49.607060       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:12:49.607108       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:12:59.612269       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:12:59.612291       1 main.go:299] handling current node
	I0917 09:12:59.612301       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:12:59.612305       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:12:59.612392       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:12:59.612417       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:12:59.612453       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:12:59.612507       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
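This kindnet container re-adds the per-node pod-CIDR route after the reboot (the `Adding route` line) and then re-lists all four nodes every ten seconds. The routes it installs should be visible in the guest's routing table; a quick check, assuming the `ip` tool in the guest:

	$ minikube -p ha-857000 ssh -- ip route show | grep 10.244
	# expected: one 'via' route per remote node, e.g.
	# 10.244.1.0/24 via 192.169.0.6 dev eth0   (interface name illustrative)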
	
	
	==> kindnet [5d84a01abd3e] <==
	I0917 09:05:22.964948       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:32.966280       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:32.966503       1 main.go:299] handling current node
	I0917 09:05:32.966605       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:32.966739       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:32.966951       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:32.967059       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:32.967333       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:32.967449       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:42.964585       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:42.964999       1 main.go:299] handling current node
	I0917 09:05:42.965252       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:42.965422       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:42.965746       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:42.965829       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:42.966204       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:42.966357       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965279       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:52.965376       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:52.965533       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:52.965592       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:52.965673       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:52.965753       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965812       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:52.965902       1 main.go:299] handling current node
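Two kindnet logs appear because the node was restarted: [5d84a01abd3e] is the pre-reboot container (timestamps around 09:05), [3d889c7c8da7] the current one. Container IDs can be mapped to status from the guest's Docker runtime with, for example:

	$ minikube -p ha-857000 ssh -- sudo docker ps -a --filter name=kindnet \
	    --format '{{.ID}}\t{{.Names}}\t{{.Status}}'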
	
	
	==> kube-apiserver [475dedee3722] <==
	I0917 09:11:36.333360       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 09:11:36.335609       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 09:11:36.383731       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 09:11:36.383763       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 09:11:36.384428       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 09:11:36.385090       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 09:11:36.385168       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 09:11:36.385606       1 aggregator.go:171] initial CRD sync complete...
	I0917 09:11:36.385745       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 09:11:36.386077       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 09:11:36.386187       1 cache.go:39] Caches are synced for autoregister controller
	I0917 09:11:36.388938       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 09:11:36.396198       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 09:11:36.396611       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0917 09:11:36.396812       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	W0917 09:11:36.438133       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0917 09:11:36.461867       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 09:11:36.465355       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 09:11:36.465387       1 policy_source.go:224] refreshing policies
	I0917 09:11:36.484251       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 09:11:36.540432       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 09:11:36.548136       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0917 09:11:36.554355       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0917 09:11:37.296848       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 09:11:37.666999       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
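This apiserver instance comes up cleanly: caches sync, quota admission evaluators register, and the stale `kubernetes` endpoint left behind by the previous instance is removed and reset to 192.169.0.5. The current endpoint list can be confirmed with:

	$ kubectl --context ha-857000 -n default get endpoints kubernetes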
	
	
	==> kube-apiserver [a18a6b023cd6] <==
	I0917 09:10:52.375949       1 options.go:228] external host was not specified, using 192.169.0.5
	I0917 09:10:52.377617       1 server.go:142] Version: v1.31.1
	I0917 09:10:52.377684       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:10:52.824178       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 09:10:52.824356       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 09:10:52.826684       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 09:10:52.828510       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 09:10:52.829505       1 instance.go:232] Using reconciler: lease
	W0917 09:11:12.810788       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0917 09:11:12.813364       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0917 09:11:12.831731       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0917 09:11:12.831919       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
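This earlier apiserver container is the counterpart to the etcd quorum loss above: the lease reconciler cannot reach etcd on 127.0.0.1:2379 before its deadline, so the process exits fatally (the F-level line), which is why a second, newer apiserver container exists. Whether anything is listening on the etcd client port can be checked from the guest (assuming the busybox netstat applet):

	$ minikube -p ha-857000 ssh -- sudo netstat -tln | grep 2379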
	
	
	==> kube-controller-manager [13b7f8a93ad4] <==
	I0917 09:10:53.058887       1 serving.go:386] Generated self-signed cert in-memory
	I0917 09:10:53.469010       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 09:10:53.469133       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:10:53.478660       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 09:10:53.478827       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 09:10:53.478677       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 09:10:53.479256       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0917 09:11:13.838538       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
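This controller-manager exited during the same window: its startup gate polls the apiserver's /healthz on 192.169.0.5:8443 and gives up once the timeout elapses. The /healthz, /livez, and /readyz endpoints are served to unauthenticated clients under default RBAC, so the probe is easy to replay:

	$ curl -k https://192.169.0.5:8443/healthz
	# prints 'ok' once the apiserver is healthy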
	
	
	==> kube-controller-manager [ca7fe8ccd4c5] <==
	I0917 09:12:17.473758       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="14.760263ms"
	I0917 09:12:17.473945       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="34.651µs"
	I0917 09:12:18.632033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="35.896µs"
	I0917 09:12:18.776005       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.253969ms"
	I0917 09:12:18.776119       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.789µs"
	I0917 09:12:18.785648       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="39.503µs"
	I0917 09:12:18.798097       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-f4rqd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-f4rqd\": the object has been modified; please apply your changes to the latest version and try again"
	I0917 09:12:18.798477       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fbe3dede-bdc6-453b-baec-6a20140ca1b1", APIVersion:"v1", ResourceVersion:"261", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-f4rqd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-f4rqd": the object has been modified; please apply your changes to the latest version and try again
	I0917 09:12:19.953163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:19.967434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:20.682851       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:23.128380       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:25.083746       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:39.721967       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-857000-m04"
	I0917 09:12:39.722197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:39.733466       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:40.010916       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:57.587381       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-f4rqd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-f4rqd\": the object has been modified; please apply your changes to the latest version and try again"
	I0917 09:12:57.588538       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fbe3dede-bdc6-453b-baec-6a20140ca1b1", APIVersion:"v1", ResourceVersion:"261", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-f4rqd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-f4rqd": the object has been modified; please apply your changes to the latest version and try again
	I0917 09:12:57.619018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="58.719907ms"
	E0917 09:12:57.619070       1 replica_set.go:560] "Unhandled Error" err="sync \"kube-system/coredns-7c65d6cfc9\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-7c65d6cfc9\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0917 09:12:57.620470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="97.988µs"
	I0917 09:12:57.624100       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-f4rqd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-f4rqd\": the object has been modified; please apply your changes to the latest version and try again"
	I0917 09:12:57.624538       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fbe3dede-bdc6-453b-baec-6a20140ca1b1", APIVersion:"v1", ResourceVersion:"261", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-f4rqd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-f4rqd": the object has been modified; please apply your changes to the latest version and try again
	I0917 09:12:57.625793       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.927µs"
	
	
	==> kube-proxy [0b03e5e48893] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:00:59.069869       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:00:59.079118       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 09:00:59.079199       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:00:59.109184       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:00:59.109227       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:00:59.109245       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:00:59.111661       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:00:59.111847       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:00:59.111876       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:59.112952       1 config.go:199] "Starting service config controller"
	I0917 09:00:59.112979       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:00:59.112995       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:00:59.112998       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:00:59.113603       1 config.go:328] "Starting node config controller"
	I0917 09:00:59.113673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:00:59.213587       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:00:59.213649       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:00:59.213808       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c37a677e3118] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:12:19.054558       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:12:19.080090       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 09:12:19.080297       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:12:19.208559       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:12:19.208589       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:12:19.208607       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:12:19.212603       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:12:19.213076       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:12:19.213105       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:12:19.216919       1 config.go:199] "Starting service config controller"
	I0917 09:12:19.217067       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:12:19.217988       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:12:19.218116       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:12:19.228165       1 config.go:328] "Starting node config controller"
	I0917 09:12:19.228196       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:12:19.319175       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:12:19.319361       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:12:19.328396       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [00ff29c21371] <==
	W0917 09:11:36.373943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 09:11:36.373983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.374259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 09:11:36.374300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.376668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 09:11:36.376725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.376996       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 09:11:36.377204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.377457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 09:11:36.377528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.378762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 09:11:36.378803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.381567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 09:11:36.381612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.381875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 09:11:36.382055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 09:11:36.382353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382449       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 09:11:36.382484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382718       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 09:11:36.382767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 09:11:36.383104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 09:11:36.446439       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d9fae1497b04] <==
	E0917 09:09:54.047035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:01.417081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:01.417178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:02.586956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:02.587049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:09.339944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:09.340160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:12.375946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:12.375997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:14.579545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:14.579979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:18.357149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:18.357192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:19.971293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:19.971663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:22.259174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:22.259229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:24.413900       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:24.413975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	I0917 09:10:24.953479       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 09:10:24.953762       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0917 09:10:24.953909       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	E0917 09:10:24.953957       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0917 09:10:24.955052       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 09:10:24.955061       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 09:12:17 ha-857000 kubelet[1572]: E0917 09:12:17.230909    1572 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-vip-ha-857000\" already exists" pod="kube-system/kube-vip-ha-857000"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.239224    1572 apiserver.go:52] "Watching apiserver"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.296247    1572 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363699    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eecd1421-3a2f-4e48-b2b2-abcbef7869e7-lib-modules\") pod \"kindnet-7pf7v\" (UID: \"eecd1421-3a2f-4e48-b2b2-abcbef7869e7\") " pod="kube-system/kindnet-7pf7v"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363849    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eecd1421-3a2f-4e48-b2b2-abcbef7869e7-xtables-lock\") pod \"kindnet-7pf7v\" (UID: \"eecd1421-3a2f-4e48-b2b2-abcbef7869e7\") " pod="kube-system/kindnet-7pf7v"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363896    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eecd1421-3a2f-4e48-b2b2-abcbef7869e7-cni-cfg\") pod \"kindnet-7pf7v\" (UID: \"eecd1421-3a2f-4e48-b2b2-abcbef7869e7\") " pod="kube-system/kindnet-7pf7v"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363942    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a396757-8954-48d2-b708-dcdfbab21dc7-xtables-lock\") pod \"kube-proxy-vskbj\" (UID: \"7a396757-8954-48d2-b708-dcdfbab21dc7\") " pod="kube-system/kube-proxy-vskbj"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363979    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a396757-8954-48d2-b708-dcdfbab21dc7-lib-modules\") pod \"kube-proxy-vskbj\" (UID: \"7a396757-8954-48d2-b708-dcdfbab21dc7\") " pod="kube-system/kube-proxy-vskbj"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.364021    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d81e7b55-a14e-4dc7-9193-ebe6914cdacf-tmp\") pod \"storage-provisioner\" (UID: \"d81e7b55-a14e-4dc7-9193-ebe6914cdacf\") " pod="kube-system/storage-provisioner"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.381710    1572 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 17 09:12:18 ha-857000 kubelet[1572]: I0917 09:12:18.732394    1572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1d198ffe0b277108e5042e64604ceb887f7f17cd5814893fbf789f1ce180f0"
	Sep 17 09:12:18 ha-857000 kubelet[1572]: I0917 09:12:18.754870    1572 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-857000" podUID="84b805d8-9a8f-4c6f-b18f-76c98ca4776c"
	Sep 17 09:12:18 ha-857000 kubelet[1572]: I0917 09:12:18.779039    1572 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-857000"
	Sep 17 09:12:19 ha-857000 kubelet[1572]: I0917 09:12:19.228668    1572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ca8e5543181b6f9996b6d7e435c3947" path="/var/lib/kubelet/pods/3ca8e5543181b6f9996b6d7e435c3947/volumes"
	Sep 17 09:12:19 ha-857000 kubelet[1572]: I0917 09:12:19.846405    1572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-857000" podStartSLOduration=1.846388448 podStartE2EDuration="1.846388448s" podCreationTimestamp="2024-09-17 09:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-17 09:12:19.829429782 +0000 UTC m=+94.772487592" watchObservedRunningTime="2024-09-17 09:12:19.846388448 +0000 UTC m=+94.789446258"
	Sep 17 09:12:45 ha-857000 kubelet[1572]: E0917 09:12:45.245854    1572 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 09:12:45 ha-857000 kubelet[1572]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 09:12:45 ha-857000 kubelet[1572]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 09:12:45 ha-857000 kubelet[1572]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 09:12:45 ha-857000 kubelet[1572]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 09:12:45 ha-857000 kubelet[1572]: I0917 09:12:45.363926    1572 scope.go:117] "RemoveContainer" containerID="fcb7038a6ac9ef515ab38df1dab73586ab93030767bab4f0d4d141f34bac886f"
	Sep 17 09:12:49 ha-857000 kubelet[1572]: I0917 09:12:49.092301    1572 scope.go:117] "RemoveContainer" containerID="611759af4bf7a8b48c2739f53afaeba3cb10af70a80bf85bfb78eebe6230c491"
	Sep 17 09:12:49 ha-857000 kubelet[1572]: I0917 09:12:49.092548    1572 scope.go:117] "RemoveContainer" containerID="67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7"
	Sep 17 09:12:49 ha-857000 kubelet[1572]: E0917 09:12:49.092633    1572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d81e7b55-a14e-4dc7-9193-ebe6914cdacf)\"" pod="kube-system/storage-provisioner" podUID="d81e7b55-a14e-4dc7-9193-ebe6914cdacf"
	Sep 17 09:13:00 ha-857000 kubelet[1572]: I0917 09:13:00.226410    1572 scope.go:117] "RemoveContainer" containerID="67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-857000 -n ha-857000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-857000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (160.66s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-857000" in json of 'profile list' to have "Degraded" status but have "HAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-857000\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-857000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-857000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-857000 -n ha-857000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-857000 logs -n 25: (3.583705356s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-857000 cp ha-857000-m03:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04:/home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m04 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp testdata/cp-test.txt                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3490570692/001/cp-test_ha-857000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000:/home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000 sudo cat                                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m02:/home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m02 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03:/home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m03 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-857000 node stop m02 -v=7                                                                                                 | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-857000 node start m02 -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:05 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000 -v=7                                                                                                       | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-857000 -v=7                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT | 17 Sep 24 02:06 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-857000 --wait=true -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:06 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	| node    | ha-857000 node delete m03 -v=7                                                                                               | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-857000 stop -v=7                                                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT | 17 Sep 24 02:10 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-857000 --wait=true                                                                                                     | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:10 PDT | 17 Sep 24 02:13 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 02:10:27
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 02:10:27.105477    4110 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:10:27.105665    4110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:27.105670    4110 out.go:358] Setting ErrFile to fd 2...
	I0917 02:10:27.105674    4110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:27.105845    4110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:10:27.107332    4110 out.go:352] Setting JSON to false
	I0917 02:10:27.130053    4110 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2397,"bootTime":1726561830,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:10:27.130205    4110 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:10:27.152188    4110 out.go:177] * [ha-857000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:10:27.194040    4110 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:10:27.194117    4110 notify.go:220] Checking for updates...
	I0917 02:10:27.238575    4110 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:27.259736    4110 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:10:27.280930    4110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:10:27.301762    4110 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:10:27.322633    4110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:10:27.344421    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:27.344920    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.344973    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:27.354413    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52088
	I0917 02:10:27.354771    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:27.355142    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:27.355153    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:27.355356    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:27.355460    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.355684    4110 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:10:27.355976    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.356005    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:27.364420    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52090
	I0917 02:10:27.364811    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:27.365167    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:27.365180    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:27.365391    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:27.365504    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.393706    4110 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 02:10:27.435894    4110 start.go:297] selected driver: hyperkit
	I0917 02:10:27.435922    4110 start.go:901] validating driver "hyperkit" against &{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:10:27.436195    4110 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:10:27.436329    4110 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:10:27.436542    4110 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:10:27.445831    4110 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:10:27.449537    4110 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.449556    4110 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:10:27.452252    4110 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:10:27.452291    4110 cni.go:84] Creating CNI manager for ""
	I0917 02:10:27.452327    4110 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:10:27.452403    4110 start.go:340] cluster config:
	{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:10:27.452523    4110 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:10:27.494874    4110 out.go:177] * Starting "ha-857000" primary control-plane node in "ha-857000" cluster
	I0917 02:10:27.515806    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:27.515897    4110 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:10:27.515918    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:10:27.516138    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:10:27.516158    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:10:27.516383    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:27.517269    4110 start.go:360] acquireMachinesLock for ha-857000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:10:27.517388    4110 start.go:364] duration metric: took 96.177µs to acquireMachinesLock for "ha-857000"
	I0917 02:10:27.517441    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:10:27.517460    4110 fix.go:54] fixHost starting: 
	I0917 02:10:27.517898    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.517930    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:27.526784    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52092
	I0917 02:10:27.527129    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:27.527462    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:27.527473    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:27.527739    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:27.527880    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.527995    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:10:27.528094    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:27.528210    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3964
	I0917 02:10:27.529100    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3964 missing from process table
	I0917 02:10:27.529122    4110 fix.go:112] recreateIfNeeded on ha-857000: state=Stopped err=<nil>
	I0917 02:10:27.529141    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	W0917 02:10:27.529225    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:10:27.570570    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000" ...
	I0917 02:10:27.591801    4110 main.go:141] libmachine: (ha-857000) Calling .Start
	I0917 02:10:27.592089    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:27.592131    4110 main.go:141] libmachine: (ha-857000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid
	I0917 02:10:27.592193    4110 main.go:141] libmachine: (ha-857000) DBG | Using UUID f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c
	I0917 02:10:27.699994    4110 main.go:141] libmachine: (ha-857000) DBG | Generated MAC c2:63:2b:63:80:76
	I0917 02:10:27.700019    4110 main.go:141] libmachine: (ha-857000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:10:27.700136    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b6e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:27.700165    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b6e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:27.700210    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:10:27.700256    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
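The Arguments/CmdLine pair above is how the driver launches the VM: it assembles an argv in Go and forks hyperkit directly, then tracks the child PID (logged as "Pid is 4124" below). A minimal sketch of that launch step, assuming a trimmed-down argument list; the real invocation also passes the virtio-blk disk, ahci-cd ISO, serial console and kexec boot entries shown above:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// Trimmed-down argv; see the logged CmdLine for the full device list.
	args := []string{
		"-A", "-u",
		"-F", "hyperkit.pid", // hyperkit writes its PID here for later liveness checks
		"-c", "2", // vCPUs
		"-m", "2200M", // guest memory
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // the driver redirects these to its logger
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hyperkit pid:", cmd.Process.Pid)
}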
	I0917 02:10:27.700270    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:10:27.701709    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Pid is 4124
	I0917 02:10:27.702059    4110 main.go:141] libmachine: (ha-857000) DBG | Attempt 0
	I0917 02:10:27.702070    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:27.702132    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:10:27.703343    4110 main.go:141] libmachine: (ha-857000) DBG | Searching for c2:63:2b:63:80:76 in /var/db/dhcpd_leases ...
	I0917 02:10:27.703398    4110 main.go:141] libmachine: (ha-857000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:10:27.703416    4110 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66e94781}
	I0917 02:10:27.703422    4110 main.go:141] libmachine: (ha-857000) DBG | Found match: c2:63:2b:63:80:76
	I0917 02:10:27.703434    4110 main.go:141] libmachine: (ha-857000) DBG | IP: 192.169.0.5
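The guest runs no agent at this point, so the driver discovers its IP by scanning the host-side vmnet lease database for the MAC hyperkit generated. A minimal sketch of that lookup; the simple key=value block format is an assumption inferred from the parsed entry above, not minikube's actual parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans the macOS vmnet DHCP lease file for an entry whose
// hw_address field contains the given MAC and returns its ip_address.
func ipForMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
			return ip, nil // ip_address precedes hw_address within each lease block
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "c2:63:2b:63:80:76")
	fmt.Println(ip, err)
}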
	I0917 02:10:27.703500    4110 main.go:141] libmachine: (ha-857000) Calling .GetConfigRaw
	I0917 02:10:27.704135    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:27.704313    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:27.704745    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:10:27.704755    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.704862    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:27.704967    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:27.705062    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:27.705172    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:27.705289    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:27.705426    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:27.705645    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:27.705655    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:10:27.709824    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:10:27.761328    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:10:27.762023    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:27.762037    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:27.762058    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:27.762068    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:28.142704    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:10:28.142720    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:10:28.257454    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:28.257477    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:28.257500    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:28.257510    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:28.258332    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:10:28.258356    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:10:33.845455    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:10:33.845506    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:10:33.845516    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:10:33.869458    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:10:38.774269    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
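Every provisioning step from here on is a one-shot command run over SSH with the machine's generated key; the `hostname` probe above is the first. A minimal equivalent using golang.org/x/crypto/ssh, with the host, user and key path taken from the log:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(
		"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; host key churns on re-create
	}
	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.Output("hostname") // same probe as the step above
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}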
	
	I0917 02:10:38.774287    4110 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:10:38.774460    4110 buildroot.go:166] provisioning hostname "ha-857000"
	I0917 02:10:38.774470    4110 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:10:38.774556    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.774689    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:38.774787    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.774865    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.774959    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:38.775097    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:38.775254    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:38.775262    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000 && echo "ha-857000" | sudo tee /etc/hostname
	I0917 02:10:38.842954    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000
	
	I0917 02:10:38.842972    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.843114    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:38.843224    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.843309    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.843398    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:38.843557    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:38.843701    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:38.843712    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:10:38.908790    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
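The shell fragment above keeps /etc/hosts idempotent: if the hostname is already present nothing changes; otherwise an existing 127.0.1.1 line is rewritten in place rather than a duplicate being appended. The same logic as a standalone Go sketch (an illustration, not minikube's code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostAlias returns hosts with a "127.0.1.1 <name>" entry,
// mirroring the grep/sed/tee logic run over SSH above.
func ensureHostAlias(hosts, name string) string {
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(strings.TrimSpace(line), " "+name) {
			return hosts // hostname already aliased somewhere
		}
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(hosts) {
		return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostAlias("127.0.0.1 localhost\n", "ha-857000"))
}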
	I0917 02:10:38.908811    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:10:38.908824    4110 buildroot.go:174] setting up certificates
	I0917 02:10:38.908830    4110 provision.go:84] configureAuth start
	I0917 02:10:38.908845    4110 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:10:38.908979    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:38.909073    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.909177    4110 provision.go:143] copyHostCerts
	I0917 02:10:38.909208    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:10:38.909278    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:10:38.909287    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:10:38.909606    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:10:38.909812    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:10:38.909853    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:10:38.909857    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:10:38.909935    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:10:38.910085    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:10:38.910127    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:10:38.910132    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:10:38.910214    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:10:38.910362    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000 san=[127.0.0.1 192.169.0.5 ha-857000 localhost minikube]
	I0917 02:10:38.962566    4110 provision.go:177] copyRemoteCerts
	I0917 02:10:38.962618    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:10:38.962632    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.962737    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:38.962836    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.962932    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:38.963020    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:38.998776    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:10:38.998851    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:10:39.018683    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:10:39.018741    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 02:10:39.038754    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:10:39.038814    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:10:39.058064    4110 provision.go:87] duration metric: took 149.217348ms to configureAuth
	I0917 02:10:39.058076    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:10:39.058257    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:39.058270    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:39.058416    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:39.058513    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:39.058598    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.058689    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.058780    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:39.058915    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:39.059035    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:39.059042    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:10:39.117847    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:10:39.117859    4110 buildroot.go:70] root file system type: tmpfs
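A tmpfs root confirms the guest is the live-boot Buildroot image, so provisioning must (re)write configuration on every start rather than assume it persisted. The probe itself is GNU df piped through tail; a minimal equivalent from Go (note --output is a GNU coreutils flag, present in the guest but not in BSD df on the macOS host):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same probe as the SSH command above: print only the fstype column
	// for / and keep the last line, skipping the header row.
	out, err := exec.Command("sh", "-c",
		`df --output=fstype / | tail -n 1`).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("root fs:", strings.TrimSpace(string(out)))
}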
	I0917 02:10:39.117937    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:10:39.117952    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:39.118078    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:39.118171    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.118258    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.118338    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:39.118469    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:39.118616    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:39.118663    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:10:39.186097    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:10:39.186120    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:39.186247    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:39.186347    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.186426    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.186527    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:39.186659    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:39.186806    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:39.186817    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:10:40.814202    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
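Two things happen above: the unit text is rendered host-side and streamed through `sudo tee` into docker.service.new, and the `diff ... || { mv ...; systemctl ... }` one-liner installs and restarts docker only when the rendered unit differs from what is installed. A minimal sketch of the rendering half with text/template; the template fields here are illustrative assumptions, not minikube's actual template:

package main

import (
	"log"
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}} --label provider={{.Provider}}
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	// Render to stdout; the real flow pipes this through ssh + sudo tee.
	err := t.Execute(os.Stdout, map[string]string{
		"CACert":     "/etc/docker/ca.pem",
		"ServerCert": "/etc/docker/server.pem",
		"ServerKey":  "/etc/docker/server-key.pem",
		"Provider":   "hyperkit",
	})
	if err != nil {
		log.Fatal(err)
	}
}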
	
	I0917 02:10:40.814217    4110 machine.go:96] duration metric: took 13.109237782s to provisionDockerMachine
	I0917 02:10:40.814229    4110 start.go:293] postStartSetup for "ha-857000" (driver="hyperkit")
	I0917 02:10:40.814236    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:10:40.814246    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:40.814438    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:10:40.814456    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:40.814571    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:40.814667    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.814762    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:40.814848    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:40.854204    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:10:40.857656    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:10:40.857668    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:10:40.857773    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:10:40.857955    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:10:40.857962    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:10:40.858166    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:10:40.867201    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:10:40.895727    4110 start.go:296] duration metric: took 81.487995ms for postStartSetup
	I0917 02:10:40.895754    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:40.895937    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:10:40.895964    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:40.896062    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:40.896140    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.896211    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:40.896292    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:40.931812    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:10:40.931872    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:10:40.965671    4110 fix.go:56] duration metric: took 13.447980679s for fixHost
	I0917 02:10:40.965693    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:40.965831    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:40.965924    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.966013    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.966122    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:40.966261    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:40.966403    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:40.966410    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:10:41.023835    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564240.935930388
	
	I0917 02:10:41.023847    4110 fix.go:216] guest clock: 1726564240.935930388
	I0917 02:10:41.023853    4110 fix.go:229] Guest: 2024-09-17 02:10:40.935930388 -0700 PDT Remote: 2024-09-17 02:10:40.965683 -0700 PDT m=+13.896006994 (delta=-29.752612ms)
	I0917 02:10:41.023870    4110 fix.go:200] guest clock delta is within tolerance: -29.752612ms
	I0917 02:10:41.023873    4110 start.go:83] releasing machines lock for "ha-857000", held for 13.506240986s
	I0917 02:10:41.023893    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024017    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:41.024124    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024416    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024496    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024577    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:10:41.024607    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:41.024622    4110 ssh_runner.go:195] Run: cat /version.json
	I0917 02:10:41.024633    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:41.024692    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:41.024731    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:41.024799    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:41.024812    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:41.024882    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:41.024908    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:41.025002    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:41.025031    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:41.057444    4110 ssh_runner.go:195] Run: systemctl --version
	I0917 02:10:41.119261    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 02:10:41.123760    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:10:41.123809    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:10:41.136297    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:10:41.136307    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:10:41.136412    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:10:41.153182    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:10:41.162387    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:10:41.171363    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:10:41.171411    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:10:41.180339    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:10:41.189205    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:10:41.198331    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:10:41.207214    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:10:41.216288    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:10:41.225185    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:10:41.234170    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:10:41.243192    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:10:41.251363    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:10:41.259648    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:41.359254    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
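The `sed -i -r` series above rewrites /etc/containerd/config.toml in place so containerd uses the cgroupfs driver, the runc v2 shim, and /etc/cni/net.d, keeping it consistent with the docker and kubelet settings chosen later. One of those substitutions expressed as a Go regexp over the file contents (a sketch of the pattern, not minikube's runner):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}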
	I0917 02:10:41.378053    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:10:41.378144    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:10:41.391608    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:10:41.406431    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:10:41.426598    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:10:41.437654    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:10:41.448507    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:10:41.470118    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:10:41.481632    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:10:41.496609    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:10:41.499690    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:10:41.507723    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:10:41.520894    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:10:41.633690    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:10:41.735063    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:10:41.735129    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:10:41.749181    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:41.842846    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:10:44.137188    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.294283491s)
	I0917 02:10:44.137256    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:10:44.147554    4110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:10:44.160480    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:10:44.170998    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:10:44.262329    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:10:44.355414    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:44.456404    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:10:44.470268    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:10:44.481488    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:44.585298    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:10:44.651024    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:10:44.651127    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:10:44.655468    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:10:44.655523    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:10:44.660816    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:10:44.685805    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:10:44.685900    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:10:44.701620    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:10:44.762577    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:10:44.762643    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:44.763055    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:10:44.767764    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:10:44.778676    4110 kubeadm.go:883] updating cluster {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 02:10:44.778770    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:44.778845    4110 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:10:44.792490    4110 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:10:44.792502    4110 docker.go:615] Images already preloaded, skipping extraction
	I0917 02:10:44.792587    4110 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:10:44.806122    4110 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:10:44.806141    4110 cache_images.go:84] Images are preloaded, skipping loading
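The preload check is just an image-tag listing compared against the expected set for v1.31.1; since every tag is already present, extracting the preload tarball is skipped. The listing half, from Go (assumes a reachable docker daemon):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same listing as the SSH step above: one repository:tag per line.
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	images := strings.Fields(string(out))
	for _, img := range images {
		fmt.Println(img)
	}
	fmt.Println(len(images), "images present")
}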
	I0917 02:10:44.806152    4110 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 02:10:44.806226    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:10:44.806308    4110 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:10:44.838425    4110 cni.go:84] Creating CNI manager for ""
	I0917 02:10:44.838438    4110 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:10:44.838451    4110 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:10:44.838467    4110 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-857000 NodeName:ha-857000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:10:44.838548    4110 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-857000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 02:10:44.838565    4110 kube-vip.go:115] generating kube-vip config ...
	I0917 02:10:44.838624    4110 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:10:44.852006    4110 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:10:44.852072    4110 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
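
	Note: kube-vip takes its entire configuration from environment variables on this static pod; "address" is the HA virtual IP (192.169.0.254) announced over ARP, and lb_enable/lb_port correspond to the "auto-enabling control-plane load-balancing" line above. A minimal sketch (assumes the gopkg.in/yaml.v3 package; not minikube's actual code) that parses such a manifest and extracts the VIP:

// vip_manifest_check.go - a rough sketch that parses a kube-vip static-pod
// manifest like the one above and pulls out the VIP address.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type pod struct {
	Spec struct {
		HostNetwork bool `yaml:"hostNetwork"`
		Containers  []struct {
			Env []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var p pod
	if err := yaml.Unmarshal(data, &p); err != nil {
		panic(err)
	}
	for _, c := range p.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" {
				// For the manifest above this prints 192.169.0.254, hostNetwork=true.
				fmt.Printf("VIP: %s (hostNetwork=%v)\n", e.Value, p.Spec.HostNetwork)
			}
		}
	}
}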
	I0917 02:10:44.852126    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:10:44.861875    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:10:44.861926    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 02:10:44.870065    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 02:10:44.883323    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:10:44.896671    4110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 02:10:44.910190    4110 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:10:44.923776    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:10:44.926683    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:10:44.936751    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:45.031050    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:10:45.045803    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.5
	I0917 02:10:45.045815    4110 certs.go:194] generating shared ca certs ...
	I0917 02:10:45.045826    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.046013    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:10:45.046090    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:10:45.046101    4110 certs.go:256] generating profile certs ...
	I0917 02:10:45.046208    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:10:45.046290    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea
	I0917 02:10:45.046357    4110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:10:45.046364    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:10:45.046385    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:10:45.046406    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:10:45.046424    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:10:45.046442    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:10:45.046474    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:10:45.046503    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:10:45.046520    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:10:45.046624    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:10:45.046679    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:10:45.046688    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:10:45.046749    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:10:45.046790    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:10:45.046829    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:10:45.046908    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:10:45.046945    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.046966    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.046984    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.047483    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:10:45.080356    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:10:45.112920    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:10:45.138450    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:10:45.175252    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:10:45.218044    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:10:45.251977    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:10:45.309085    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:10:45.353596    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:10:45.384476    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:10:45.404778    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:10:45.423525    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 02:10:45.437207    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:10:45.441704    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:10:45.450346    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.453899    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.453945    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.458361    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:10:45.466854    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:10:45.475379    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.478924    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.478963    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.483279    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:10:45.491638    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:10:45.500375    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.504070    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.504128    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.508583    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
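
	Note: OpenSSL resolves CAs in a certificate directory by subject-name hash, so after copying each PEM the commands above compute `openssl x509 -hash -noout` and symlink <hash>.0 to the file (e.g. b5213941.0 for minikubeCA.pem). A simplified Go sketch of that step (illustrative paths; it shells out to the openssl binary and needs write access to /etc/ssl/certs):

// hash_link_sketch.go - ask openssl for the subject hash of a CA cert, then
// link <hash>.0 to it so OpenSSL's cert-directory lookup can find it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: remove any stale link first, then create it.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}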
	I0917 02:10:45.516977    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:10:45.520582    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:10:45.524889    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:10:45.529282    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:10:45.533668    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:10:45.538022    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:10:45.542262    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
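
	Note: `openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24h), which is how the restart path decides whether certs need regeneration. The same check expressed with Go's standard library, as a sketch (path taken from the log):

// checkend_sketch.go - the Go equivalent of `openssl x509 -checkend 86400`:
// parse a PEM certificate and report whether it expires within the next 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past the 24h window:", cert.NotAfter)
}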
	I0917 02:10:45.546447    4110 kubeadm.go:392] StartCluster: {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:10:45.546579    4110 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:10:45.558935    4110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:10:45.566714    4110 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 02:10:45.566724    4110 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:10:45.566760    4110 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:10:45.574257    4110 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:10:45.574553    4110 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-857000" does not appear in /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:45.574638    4110 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1025/kubeconfig needs updating (will repair): [kubeconfig missing "ha-857000" cluster setting kubeconfig missing "ha-857000" context setting]
	I0917 02:10:45.574818    4110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.575437    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:45.575640    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:10:45.575954    4110 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 02:10:45.576155    4110 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:10:45.583535    4110 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 02:10:45.583548    4110 kubeadm.go:597] duration metric: took 16.820219ms to restartPrimaryControlPlane
	I0917 02:10:45.583553    4110 kubeadm.go:394] duration metric: took 37.114772ms to StartCluster
	I0917 02:10:45.583562    4110 settings.go:142] acquiring lock: {Name:mk5de70c670a23cf49fdbd19ddb5c1d2a9de9a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.583637    4110 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:45.584029    4110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.584244    4110 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:10:45.584257    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:10:45.584290    4110 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:10:45.584399    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:45.629290    4110 out.go:177] * Enabled addons: 
	I0917 02:10:45.650483    4110 addons.go:510] duration metric: took 66.114939ms for enable addons: enabled=[]
	I0917 02:10:45.650526    4110 start.go:246] waiting for cluster config update ...
	I0917 02:10:45.650541    4110 start.go:255] writing updated cluster config ...
	I0917 02:10:45.672110    4110 out.go:201] 
	I0917 02:10:45.693671    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:45.693812    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:45.716376    4110 out.go:177] * Starting "ha-857000-m02" control-plane node in "ha-857000" cluster
	I0917 02:10:45.758138    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:45.758205    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:10:45.758422    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:10:45.758440    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:10:45.758566    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:45.759523    4110 start.go:360] acquireMachinesLock for ha-857000-m02: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:10:45.759643    4110 start.go:364] duration metric: took 94.526µs to acquireMachinesLock for "ha-857000-m02"
	I0917 02:10:45.759684    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:10:45.759694    4110 fix.go:54] fixHost starting: m02
	I0917 02:10:45.760135    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:45.760170    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:45.769422    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52114
	I0917 02:10:45.769778    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:45.770120    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:45.770130    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:45.770332    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:45.770446    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:10:45.770540    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetState
	I0917 02:10:45.770620    4110 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:45.770696    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3976
	I0917 02:10:45.771617    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3976 missing from process table
	I0917 02:10:45.771641    4110 fix.go:112] recreateIfNeeded on ha-857000-m02: state=Stopped err=<nil>
	I0917 02:10:45.771648    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	W0917 02:10:45.771734    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:10:45.793214    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m02" ...
	I0917 02:10:45.835194    4110 main.go:141] libmachine: (ha-857000-m02) Calling .Start
	I0917 02:10:45.835422    4110 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:45.835478    4110 main.go:141] libmachine: (ha-857000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid
	I0917 02:10:45.836481    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3976 missing from process table
	I0917 02:10:45.836493    4110 main.go:141] libmachine: (ha-857000-m02) DBG | pid 3976 is in state "Stopped"
	I0917 02:10:45.836506    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid...
	I0917 02:10:45.836730    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Using UUID 1940a045-0b1b-4b28-842d-d4858a62cbd3
	I0917 02:10:45.862461    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Generated MAC 9a:95:4e:4b:65:fe
	I0917 02:10:45.862487    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:10:45.862599    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:45.862645    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:45.862683    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1940a045-0b1b-4b28-842d-d4858a62cbd3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:10:45.862720    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1940a045-0b1b-4b28-842d-d4858a62cbd3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:10:45.862741    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:10:45.864138    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Pid is 4131
	I0917 02:10:45.864563    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Attempt 0
	I0917 02:10:45.864573    4110 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:45.864635    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 4131
	I0917 02:10:45.866426    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Searching for 9a:95:4e:4b:65:fe in /var/db/dhcpd_leases ...
	I0917 02:10:45.866511    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:10:45.866527    4110 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:10:45.866546    4110 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea9817}
	I0917 02:10:45.866556    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Found match: 9a:95:4e:4b:65:fe
	I0917 02:10:45.866585    4110 main.go:141] libmachine: (ha-857000-m02) DBG | IP: 192.169.0.6
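
	Note: the hyperkit driver has no guest agent, so it recovers the VM's IP by matching the generated MAC address against the host's DHCP lease database, as the "Searching for 9a:95:4e:4b:65:fe in /var/db/dhcpd_leases" lines above show. A rough sketch of that scan (assumes the usual macOS lease-file layout of { name=... ip_address=... hw_address=1,<mac> ... } blocks):

// lease_lookup_sketch.go - scan macOS's /var/db/dhcpd_leases for an
// hw_address entry matching the VM's MAC and report the leased IP.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const mac = "9a:95:4e:4b:65:fe" // from the log
	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=1,"+mac):
			// ip_address precedes hw_address within each lease block.
			fmt.Println("found", mac, "at", ip)
			return
		}
	}
	fmt.Println("no lease found for", mac)
}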
	I0917 02:10:45.866617    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetConfigRaw
	I0917 02:10:45.867379    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:10:45.867624    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:45.868172    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:10:45.868192    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:10:45.868319    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:10:45.868433    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:10:45.868540    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:10:45.868629    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:10:45.868743    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:10:45.868892    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:45.869038    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:10:45.869047    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:10:45.871979    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:10:45.880237    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:10:45.881261    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:45.881280    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:45.881317    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:45.881331    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:46.263104    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:10:46.263119    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:10:46.377844    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:46.377864    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:46.377874    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:46.377890    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:46.378727    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:10:46.378736    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:10:51.977750    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:51 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:10:51.977833    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:51 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:10:51.977841    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:51 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:10:52.002295    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:52 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:11:20.931384    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:11:20.931398    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:11:20.931549    4110 buildroot.go:166] provisioning hostname "ha-857000-m02"
	I0917 02:11:20.931560    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:11:20.931664    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:20.931762    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:20.931855    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.931937    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.932033    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:20.932169    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:20.932351    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:20.932359    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m02 && echo "ha-857000-m02" | sudo tee /etc/hostname
	I0917 02:11:20.993183    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m02
	
	I0917 02:11:20.993198    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:20.993326    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:20.993440    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.993526    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.993618    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:20.993763    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:20.993914    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:20.993925    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:11:21.050925    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:11:21.050951    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:11:21.050960    4110 buildroot.go:174] setting up certificates
	I0917 02:11:21.050966    4110 provision.go:84] configureAuth start
	I0917 02:11:21.050972    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:11:21.051109    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:11:21.051192    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.051304    4110 provision.go:143] copyHostCerts
	I0917 02:11:21.051330    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:21.051388    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:11:21.051394    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:21.051551    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:11:21.051732    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:21.051778    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:11:21.051784    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:21.051862    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:11:21.051999    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:21.052037    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:11:21.052041    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:21.052127    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:11:21.052261    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m02 san=[127.0.0.1 192.169.0.6 ha-857000-m02 localhost minikube]
	I0917 02:11:21.131473    4110 provision.go:177] copyRemoteCerts
	I0917 02:11:21.131534    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:11:21.131551    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.131683    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.131772    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.131866    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.131988    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:21.165457    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:11:21.165530    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:11:21.185353    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:11:21.185424    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:11:21.204885    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:11:21.204944    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:11:21.224555    4110 provision.go:87] duration metric: took 173.578725ms to configureAuth
	I0917 02:11:21.224572    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:11:21.224752    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:21.224765    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:21.224898    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.224985    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.225071    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.225151    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.225226    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.225334    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:21.225453    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:21.225471    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:11:21.276594    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:11:21.276610    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:11:21.276682    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:11:21.276692    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.276824    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.276911    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.276982    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.277068    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.277206    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:21.277343    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:21.277390    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:11:21.338440    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:11:21.338457    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.338602    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.338693    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.338786    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.338878    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.339018    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:21.339165    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:21.339180    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:11:23.000541    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
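
	Note: the `sudo diff -u ... || { mv ...; systemctl daemon-reload && systemctl enable docker && systemctl restart docker; }` command above makes the unit update idempotent: Docker is only reconfigured and restarted when the freshly rendered docker.service differs from the installed one (here it did not exist yet, hence the diff error and the install). A small write-if-changed sketch of the same idea (hypothetical helper, not minikube's code):

// unit_update_sketch.go - only replace a unit file (and signal a restart)
// when the rendered content differs from what is already installed.
package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateIfChanged writes newContent to path only when it differs from the
// current file, and reports whether a daemon-reload/restart is needed.
func updateIfChanged(path string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // unchanged: skip the docker restart entirely
	}
	if err := os.WriteFile(path, newContent, 0644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := updateIfChanged("/lib/systemd/system/docker.service", []byte("[Unit]\n...\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println("restart needed:", changed)
}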
	
	I0917 02:11:23.000557    4110 machine.go:96] duration metric: took 37.131734761s to provisionDockerMachine
	I0917 02:11:23.000565    4110 start.go:293] postStartSetup for "ha-857000-m02" (driver="hyperkit")
	I0917 02:11:23.000572    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:11:23.000581    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.000771    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:11:23.000784    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.000877    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.000970    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.001060    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.001151    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:23.034070    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:11:23.037044    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:11:23.037054    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:11:23.037149    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:11:23.037326    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:11:23.037333    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:11:23.037542    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:11:23.045540    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:11:23.064134    4110 start.go:296] duration metric: took 63.560241ms for postStartSetup
	I0917 02:11:23.064153    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.064355    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:11:23.064367    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.064443    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.064537    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.064625    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.064699    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:23.096648    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:11:23.096719    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:11:23.150750    4110 fix.go:56] duration metric: took 37.39040777s for fixHost
	I0917 02:11:23.150781    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.150933    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.151043    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.151139    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.151225    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.151344    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:23.151480    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:23.151487    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:11:23.205108    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564282.931256187
	
	I0917 02:11:23.205121    4110 fix.go:216] guest clock: 1726564282.931256187
	I0917 02:11:23.205126    4110 fix.go:229] Guest: 2024-09-17 02:11:22.931256187 -0700 PDT Remote: 2024-09-17 02:11:23.150765 -0700 PDT m=+56.080359699 (delta=-219.508813ms)
	I0917 02:11:23.205134    4110 fix.go:200] guest clock delta is within tolerance: -219.508813ms
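The fix.go lines compute the drift between the guest clock (read over SSH via `date +%s.%N`) and the host clock, and only resync when the delta leaves a tolerance window (the -219ms delta above is accepted). A sketch of that comparison; the 2-second threshold is an illustrative assumption, not minikube's actual tolerance:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // parseGuestClock converts `date +%s.%N` output (e.g. "1726564282.931256187")
    // into a time.Time. Float parsing loses some nanosecond precision, which is
    // fine for a millisecond-scale drift check.
    func parseGuestClock(s string) (time.Time, error) {
    	f, err := strconv.ParseFloat(s, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	sec := int64(f)
    	nsec := int64((f - float64(sec)) * 1e9)
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1726564282.931256187")
    	if err != nil {
    		panic(err)
    	}
    	host := time.Date(2024, 9, 17, 2, 11, 23, 150765000, time.FixedZone("PDT", -7*3600))
    	delta := guest.Sub(host)
    	const tolerance = 2 * time.Second // illustrative threshold only
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
    }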
	I0917 02:11:23.205138    4110 start.go:83] releasing machines lock for "ha-857000-m02", held for 37.444836088s
	I0917 02:11:23.205153    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.205283    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:11:23.226836    4110 out.go:177] * Found network options:
	I0917 02:11:23.247780    4110 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 02:11:23.268466    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:23.268508    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.269341    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.269597    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.269778    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 02:11:23.269794    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:23.269828    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.269896    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:11:23.269915    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.270129    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.270153    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.270351    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.270407    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.270526    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.270571    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.270741    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:23.270760    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	W0917 02:11:23.355936    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:11:23.356046    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:11:23.371785    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
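The `find ... -exec mv {} {}.mk_disabled` run above side-lines every bridge/podman CNI config in /etc/cni/net.d so it cannot conflict with the CNI the cluster actually uses. An equivalent Go sketch of that rename pass (a hedged rendering of the shell pipeline, not minikube's own code):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableConflictingCNI renames bridge/podman configs to *.mk_disabled,
    // mirroring the find/mv pipeline in the log above.
    func disableConflictingCNI(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return disabled, err
    			}
    			disabled = append(disabled, src)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	d, err := disableConflictingCNI("/etc/cni/net.d")
    	fmt.Println(d, err)
    }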
	I0917 02:11:23.371805    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:23.371897    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:23.389343    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:11:23.397507    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:11:23.405706    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:11:23.405760    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:11:23.413954    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:23.422064    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:11:23.430077    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:23.438247    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:11:23.446615    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:11:23.455025    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:11:23.463904    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:11:23.472877    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:11:23.480886    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:11:23.488979    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:23.586431    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
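The sed chain above rewrites /etc/containerd/config.toml in place: pinning the sandbox image, relaxing restrict_oom_score_adj, and setting `SystemdCgroup = false` so containerd agrees with the "cgroupfs" driver the kubelet will use, before containerd is restarted. The key substitution, reproduced in Go (same semantics as the sed expression, shown on inline sample data):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := []byte(`[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `)
    	// Same effect as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
    	fmt.Print(string(out))
    }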
	I0917 02:11:23.605512    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:23.605590    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:11:23.619031    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:23.632481    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:11:23.650301    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:23.661034    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:23.671499    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:11:23.693809    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:23.704324    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:23.719425    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:11:23.722279    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:11:23.729409    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:11:23.743121    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:11:23.848749    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:11:23.947630    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:11:23.947661    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:11:23.965207    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:24.060164    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:11:26.333778    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.273556023s)
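docker.go writes a small /etc/docker/daemon.json (130 bytes here) that forces the cgroupfs driver, then restarts docker. The log does not show the file's contents, so the sketch below builds a plausible minimal equivalent; the exact field set is an assumption:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Assumed shape: the one field that matters for the cgroup driver switch.
    	daemon := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	b, err := json.MarshalIndent(daemon, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s\n(%d bytes)\n", b, len(b))
    }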
	I0917 02:11:26.333847    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:11:26.345198    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:26.355965    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:11:26.461793    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:11:26.556361    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:26.674366    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:11:26.687753    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:26.697698    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:26.797118    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:11:26.861306    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:11:26.861392    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:11:26.865857    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:11:26.865915    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:11:26.869732    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:11:26.894886    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:11:26.894999    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:26.911893    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:26.950833    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:11:26.972458    4110 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 02:11:26.993284    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:11:26.993711    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:11:26.998329    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
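The bash one-liner above makes the hosts entry idempotent: it drops any stale line ending in "\thost.minikube.internal", appends the current gateway IP, and swaps the file in via a temp copy. The same update expressed in Go (the upsertHost helper is hypothetical):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHost removes any line ending in "\t<name>" and appends "ip\tname",
    // mirroring the grep -v / echo pipeline from the log.
    func upsertHost(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	before := "127.0.0.1\tlocalhost\n192.169.0.2\thost.minikube.internal\n"
    	fmt.Print(upsertHost(before, "192.169.0.1", "host.minikube.internal"))
    }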
	I0917 02:11:27.008512    4110 mustload.go:65] Loading cluster: ha-857000
	I0917 02:11:27.008684    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:27.008920    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:11:27.008943    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:11:27.017607    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52136
	I0917 02:11:27.017941    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:11:27.018292    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:11:27.018310    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:11:27.018503    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:11:27.018620    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:11:27.018699    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:27.018771    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:11:27.019715    4110 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:11:27.019989    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:11:27.020015    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:11:27.028562    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52138
	I0917 02:11:27.028902    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:11:27.029241    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:11:27.029257    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:11:27.029461    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:11:27.029566    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:11:27.029665    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.6
	I0917 02:11:27.029672    4110 certs.go:194] generating shared ca certs ...
	I0917 02:11:27.029680    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:11:27.029857    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:11:27.029930    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:11:27.029938    4110 certs.go:256] generating profile certs ...
	I0917 02:11:27.030058    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:11:27.030140    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.d3e75930
	I0917 02:11:27.030214    4110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:11:27.030221    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:11:27.030242    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:11:27.030266    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:11:27.030285    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:11:27.030303    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:11:27.030337    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:11:27.030366    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:11:27.030389    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:11:27.030486    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:11:27.030540    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:11:27.030549    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:11:27.030587    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:11:27.030621    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:11:27.030651    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:11:27.030716    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:11:27.030753    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.030774    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.030792    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.030816    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:11:27.030911    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:11:27.031000    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:11:27.031078    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:11:27.031162    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:11:27.058778    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 02:11:27.062313    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 02:11:27.070939    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 02:11:27.074280    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 02:11:27.083003    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 02:11:27.086057    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 02:11:27.094554    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 02:11:27.097659    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 02:11:27.106657    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 02:11:27.109894    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 02:11:27.118370    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 02:11:27.121478    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 02:11:27.130386    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:11:27.150256    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:11:27.169526    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:11:27.188769    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:11:27.207966    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:11:27.227067    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:11:27.246289    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:11:27.265271    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:11:27.284669    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:11:27.303761    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:11:27.323113    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:11:27.342331    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 02:11:27.355765    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 02:11:27.369277    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 02:11:27.382837    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 02:11:27.396474    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 02:11:27.410313    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 02:11:27.423731    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 02:11:27.437366    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:11:27.441447    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:11:27.450619    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.453941    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.453997    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.458171    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:11:27.467199    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:11:27.476144    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.479431    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.479473    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.483603    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:11:27.492580    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:11:27.501517    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.504871    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.504915    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.509027    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:11:27.517892    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:11:27.521155    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:11:27.525378    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:11:27.529633    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:11:27.533810    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:11:27.538003    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:11:27.542137    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
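Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within 24 hours, which is how the run decides whether control-plane certs need regeneration. The same check in pure Go via crypto/x509, as a hedged equivalent of the openssl invocation:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // equivalent to `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
    	fmt.Println("expires within 24h:", soon, err)
    }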
	I0917 02:11:27.546288    4110 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I0917 02:11:27.546336    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:11:27.546350    4110 kube-vip.go:115] generating kube-vip config ...
	I0917 02:11:27.546384    4110 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:11:27.558948    4110 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:11:27.558990    4110 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
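The static-pod manifest above is what gives the HA cluster its control-plane load balancer: address 192.169.0.254 is the VIP, and vip_leaseduration/vip_renewdeadline/vip_retryperiod (5/3/1 seconds) are kube-vip's leader-election timings. A small sketch that parses such a manifest and extracts those values, assuming the sigs.k8s.io/yaml and k8s.io/api modules are available:

    package main

    import (
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
    	if err != nil {
    		panic(err)
    	}
    	var pod corev1.Pod
    	if err := yaml.Unmarshal(data, &pod); err != nil {
    		panic(err)
    	}
    	// The manifest has a single "kube-vip" container; pull the interesting env vars.
    	for _, env := range pod.Spec.Containers[0].Env {
    		switch env.Name {
    		case "address", "vip_leaseduration", "vip_renewdeadline", "vip_retryperiod":
    			fmt.Printf("%s=%s\n", env.Name, env.Value)
    		}
    	}
    }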
	I0917 02:11:27.559048    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:11:27.568292    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:11:27.568351    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 02:11:27.577686    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 02:11:27.591394    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:11:27.604835    4110 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:11:27.618390    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:11:27.621271    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:11:27.630851    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:27.729065    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:11:27.743762    4110 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:11:27.743972    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:27.765105    4110 out.go:177] * Verifying Kubernetes components...
	I0917 02:11:27.805899    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:27.933521    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:11:27.948089    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:11:27.948282    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 02:11:27.948321    4110 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 02:11:27.948495    4110 node_ready.go:35] waiting up to 6m0s for node "ha-857000-m02" to be "Ready" ...
	I0917 02:11:27.948579    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:27.948584    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:27.948591    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:27.948595    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:28.948736    4110 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0917 02:11:28.948861    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:28.948870    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:28.948878    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:28.948882    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.256443    4110 round_trippers.go:574] Response Status: 200 OK in 7307 milliseconds
	I0917 02:11:36.257038    4110 node_ready.go:49] node "ha-857000-m02" has status "Ready":"True"
	I0917 02:11:36.257051    4110 node_ready.go:38] duration metric: took 8.308394835s for node "ha-857000-m02" to be "Ready" ...
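node_ready.go polls GET /api/v1/nodes/<name> until the NodeReady condition reports True; here that took 8.3s, including one request that timed out against the stale VIP before the host override kicked in. A hedged client-go sketch of the same wait (kubeconfig path and node name taken from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19648-1025/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-857000-m02", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // retry on transient errors, as the log does
    			}
    			return nodeReady(n), nil
    		})
    	fmt.Println("node ready:", err == nil)
    }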
	I0917 02:11:36.257061    4110 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:11:36.257098    4110 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 02:11:36.257107    4110 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 02:11:36.257147    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:36.257152    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.257158    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.257164    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.271996    4110 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0917 02:11:36.280676    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.280736    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:11:36.280742    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.280752    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.280756    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.307985    4110 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0917 02:11:36.308476    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.308484    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.308491    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.308501    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.312984    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:36.313392    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.313402    4110 pod_ready.go:82] duration metric: took 32.709315ms for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.313409    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.313452    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nl5j5
	I0917 02:11:36.313457    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.313463    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.313468    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.319771    4110 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 02:11:36.320384    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.320393    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.320400    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.320403    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.322816    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:36.323378    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.323388    4110 pod_ready.go:82] duration metric: took 9.97387ms for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.323395    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.323435    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000
	I0917 02:11:36.323440    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.323446    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.323450    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.327486    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:36.328047    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.328054    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.328060    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.328063    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.331571    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:36.332110    4110 pod_ready.go:93] pod "etcd-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.332121    4110 pod_ready.go:82] duration metric: took 8.720083ms for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.332128    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.332168    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m02
	I0917 02:11:36.332173    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.332179    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.332184    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.336324    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:36.336846    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:36.336854    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.336860    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.336864    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.340608    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:36.341048    4110 pod_ready.go:93] pod "etcd-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.341057    4110 pod_ready.go:82] duration metric: took 8.92351ms for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.341064    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.341104    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:11:36.341110    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.341116    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.341121    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.343462    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:36.458248    4110 request.go:632] Waited for 114.333049ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:36.458307    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:36.458312    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.458318    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.458326    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.466021    4110 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 02:11:36.466526    4110 pod_ready.go:93] pod "etcd-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.466536    4110 pod_ready.go:82] duration metric: took 125.46489ms for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
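The "Waited for ... due to client-side throttling" lines above come from client-go's default token-bucket rate limiter (QPS 5, burst 10 unless overridden): once the burst is spent, each further request queues for roughly 1/QPS seconds, which matches the ~100-200ms waits in this run. The limiter in isolation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"k8s.io/client-go/util/flowcontrol"
    )

    func main() {
    	// client-go's default client-side limiter: 5 requests/sec, burst of 10.
    	rl := flowcontrol.NewTokenBucketRateLimiter(5, 10)
    	start := time.Now()
    	for i := 0; i < 15; i++ {
    		if err := rl.Wait(context.Background()); err != nil {
    			panic(err)
    		}
    		// Requests 0-9 pass immediately; 10+ arrive ~200ms apart.
    		fmt.Printf("request %2d at +%v\n", i, time.Since(start).Round(10*time.Millisecond))
    	}
    }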
	I0917 02:11:36.466548    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.657514    4110 request.go:632] Waited for 190.921312ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:11:36.657567    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:11:36.657574    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.657579    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.657584    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.659804    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:36.857671    4110 request.go:632] Waited for 197.395211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.857701    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.857705    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.857711    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.857715    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.861065    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:36.861653    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.861669    4110 pod_ready.go:82] duration metric: took 395.104039ms for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.861677    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.057332    4110 request.go:632] Waited for 195.603008ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:11:37.057382    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:11:37.057387    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.057393    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.057398    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.060216    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:37.258671    4110 request.go:632] Waited for 197.954534ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:37.258706    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:37.258713    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.258721    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.258727    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.267718    4110 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 02:11:37.268069    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:37.268082    4110 pod_ready.go:82] duration metric: took 406.392892ms for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.268090    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.457925    4110 request.go:632] Waited for 189.791882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:11:37.457975    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:11:37.457980    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.457987    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.457992    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.461663    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:37.658806    4110 request.go:632] Waited for 196.487027ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:37.658861    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:37.658867    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.658874    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.658878    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.661429    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:37.661888    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:37.661897    4110 pod_ready.go:82] duration metric: took 393.794602ms for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.661905    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.857414    4110 request.go:632] Waited for 195.469923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:11:37.857468    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:11:37.857474    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.857481    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.857486    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.860019    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.057880    4110 request.go:632] Waited for 197.333642ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:38.057910    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:38.057915    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.057922    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.057927    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.060540    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.061091    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:38.061101    4110 pod_ready.go:82] duration metric: took 399.184022ms for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.061109    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.257757    4110 request.go:632] Waited for 196.608954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:11:38.257857    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:11:38.257871    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.257877    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.257882    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.259904    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.458082    4110 request.go:632] Waited for 197.709678ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:38.458138    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:38.458147    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.458154    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.458158    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.460347    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.460715    4110 pod_ready.go:98] node "ha-857000-m02" hosting pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:38.460726    4110 pod_ready.go:82] duration metric: took 399.604676ms for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	E0917 02:11:38.460732    4110 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-857000-m02" hosting pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
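pod_ready.go short-circuits here: a pod hosted on a node whose Ready condition is False cannot become Ready, so waiting on it would stall the whole extra-wait loop; instead the condition is logged as an error and the next pod is checked. A hedged sketch of that guard (shouldSkipPodWait is a hypothetical helper, not minikube's function):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // shouldSkipPodWait mirrors the guard in the log: a pod scheduled on a
    // not-Ready node is skipped rather than waited on.
    func shouldSkipPodWait(pod *corev1.Pod, node *corev1.Node) bool {
    	if pod.Spec.NodeName != node.Name {
    		return false
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	node := &corev1.Node{}
    	node.Name = "ha-857000-m02"
    	node.Status.Conditions = []corev1.NodeCondition{{Type: corev1.NodeReady, Status: corev1.ConditionFalse}}
    	pod := &corev1.Pod{}
    	pod.Spec.NodeName = "ha-857000-m02"
    	fmt.Println("skip:", shouldSkipPodWait(pod, node))
    }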
	I0917 02:11:38.460739    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.658188    4110 request.go:632] Waited for 197.403717ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:11:38.658255    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:11:38.658261    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.658267    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.658271    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.660934    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.857786    4110 request.go:632] Waited for 196.168284ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:38.857841    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:38.857851    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.857863    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.857873    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.861470    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:38.861751    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:38.861759    4110 pod_ready.go:82] duration metric: took 401.003253ms for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.861766    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.057800    4110 request.go:632] Waited for 195.986319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:11:39.057882    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:11:39.057893    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.057904    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.057912    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.061639    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:39.257697    4110 request.go:632] Waited for 195.312452ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:11:39.257726    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:11:39.257731    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.257737    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.257741    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.260209    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:39.260462    4110 pod_ready.go:93] pod "kube-proxy-528ht" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:39.260471    4110 pod_ready.go:82] duration metric: took 398.692905ms for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.260478    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.459321    4110 request.go:632] Waited for 198.788481ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:11:39.459387    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:11:39.459394    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.459411    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.459422    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.461885    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:39.657441    4110 request.go:632] Waited for 195.121107ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:39.657541    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:39.657551    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.657579    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.657585    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.661441    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:39.661929    4110 pod_ready.go:93] pod "kube-proxy-g9wxm" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:39.661942    4110 pod_ready.go:82] duration metric: took 401.451734ms for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.661951    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.857721    4110 request.go:632] Waited for 195.727193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:11:39.857785    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:11:39.857791    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.857797    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.857802    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.859663    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:11:40.058574    4110 request.go:632] Waited for 198.443343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.058668    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.058679    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.058690    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.058699    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.062499    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:40.063124    4110 pod_ready.go:93] pod "kube-proxy-vskbj" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:40.063133    4110 pod_ready.go:82] duration metric: took 401.170349ms for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.063140    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.257873    4110 request.go:632] Waited for 194.653928ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:11:40.257927    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:11:40.257937    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.257948    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.257956    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.262255    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:40.458287    4110 request.go:632] Waited for 195.380222ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:40.458411    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:40.458421    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.458432    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.458443    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.462171    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:40.462629    4110 pod_ready.go:98] node "ha-857000-m02" hosting pod "kube-proxy-zrqvr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:40.462643    4110 pod_ready.go:82] duration metric: took 399.490798ms for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	E0917 02:11:40.462673    4110 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-857000-m02" hosting pod "kube-proxy-zrqvr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:40.462687    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.658101    4110 request.go:632] Waited for 195.359912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:11:40.658147    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:11:40.658152    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.658159    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.658164    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.660407    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:40.858455    4110 request.go:632] Waited for 197.559018ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.858564    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.858583    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.858595    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.858601    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.861876    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:40.862327    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:40.862336    4110 pod_ready.go:82] duration metric: took 399.635382ms for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.862343    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:41.057949    4110 request.go:632] Waited for 195.512959ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:11:41.058021    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:11:41.058032    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.058044    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.058051    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.061708    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:41.257802    4110 request.go:632] Waited for 195.475163ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:41.257884    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:41.257895    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.257906    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.257913    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.261190    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:41.261502    4110 pod_ready.go:98] node "ha-857000-m02" hosting pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:41.261513    4110 pod_ready.go:82] duration metric: took 399.156939ms for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	E0917 02:11:41.261527    4110 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-857000-m02" hosting pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:41.261532    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:41.458981    4110 request.go:632] Waited for 197.407496ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:11:41.459061    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:11:41.459070    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.459078    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.459084    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.461880    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:41.657846    4110 request.go:632] Waited for 195.542216ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:41.657906    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:41.657913    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.657921    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.657934    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.660204    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:41.660601    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:41.660610    4110 pod_ready.go:82] duration metric: took 399.066544ms for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:41.660617    4110 pod_ready.go:39] duration metric: took 5.403454072s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
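Note: the pod_ready block above polls each system-critical pod (GET the pod, then GET its hosting node), backing off roughly 200ms per request under client-side throttling, and skips pods whose node is NotReady instead of failing outright. Below is a minimal client-go sketch of the same Ready-condition wait; the kube-system namespace and the 6m0s timeout come from the log, everything else is illustrative and not minikube's actual implementation.

	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// podReady reports whether the Pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls until the named kube-system pod is Ready,
	// mirroring the "waiting up to 6m0s" lines above.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 400*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				p, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient API errors as "not ready yet"
				}
				return podReady(p), nil
			})
	}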
	I0917 02:11:41.660636    4110 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:11:41.660697    4110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:11:41.672821    4110 api_server.go:72] duration metric: took 13.928795458s to wait for apiserver process to appear ...
	I0917 02:11:41.672831    4110 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:11:41.672845    4110 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 02:11:41.683603    4110 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0917 02:11:41.683654    4110 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 02:11:41.683660    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.683666    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.683670    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.684276    4110 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:11:41.684340    4110 api_server.go:141] control plane version: v1.31.1
	I0917 02:11:41.684350    4110 api_server.go:131] duration metric: took 11.515194ms to wait for apiserver health ...
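Note: the healthz gate is a bare HTTPS GET that must return 200 with the body "ok"; only then does minikube read /version to record the control-plane version (v1.31.1 here). A rough sketch of such a probe follows; the TLS setup is a placeholder, since the real client authenticates with the cluster's CA and client certificates.

	package healthcheck

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz probes <base>/healthz and succeeds only on a 200 "ok" body.
	func checkHealthz(base string) error {
		c := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Placeholder: verification is skipped here; supply the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := c.Get(base + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return err
		}
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz not ok: %d %q", resp.StatusCode, body)
		}
		return nil
	}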
	I0917 02:11:41.684356    4110 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 02:11:41.857675    4110 request.go:632] Waited for 173.274042ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:41.857793    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:41.857803    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.857823    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.857833    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.863157    4110 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:11:41.868330    4110 system_pods.go:59] 26 kube-system pods found
	I0917 02:11:41.868348    4110 system_pods.go:61] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running
	I0917 02:11:41.868352    4110 system_pods.go:61] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running
	I0917 02:11:41.868360    4110 system_pods.go:61] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:11:41.868366    4110 system_pods.go:61] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 02:11:41.868371    4110 system_pods.go:61] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:11:41.868377    4110 system_pods.go:61] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:11:41.868392    4110 system_pods.go:61] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running
	I0917 02:11:41.868398    4110 system_pods.go:61] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:11:41.868402    4110 system_pods.go:61] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:11:41.868406    4110 system_pods.go:61] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:11:41.868424    4110 system_pods.go:61] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 02:11:41.868430    4110 system_pods.go:61] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:11:41.868434    4110 system_pods.go:61] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:11:41.868438    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 02:11:41.868442    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:11:41.868445    4110 system_pods.go:61] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:11:41.868448    4110 system_pods.go:61] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:11:41.868450    4110 system_pods.go:61] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running
	I0917 02:11:41.868454    4110 system_pods.go:61] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:11:41.868456    4110 system_pods.go:61] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:11:41.868468    4110 system_pods.go:61] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 02:11:41.868473    4110 system_pods.go:61] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:11:41.868484    4110 system_pods.go:61] "kube-vip-ha-857000" [84b805d8-9a8f-4c6f-b18f-76c98ca4776c] Running
	I0917 02:11:41.868488    4110 system_pods.go:61] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:11:41.868490    4110 system_pods.go:61] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:11:41.868493    4110 system_pods.go:61] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:11:41.868498    4110 system_pods.go:74] duration metric: took 184.134673ms to wait for pod list to return data ...
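Note: in the listing above, entries such as "Running / Ready:ContainersNotReady (containers with unready status: [etcd])" mean the pod phase is Running while its Ready and ContainersReady conditions are still False, which is expected for the m02 control-plane pods that are mid-restart. A small helper in the same spirit (illustrative formatting only, not minikube's code):

	package podwait

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// describePod renders "<phase>" plus any false readiness conditions,
	// approximating the "Running / Ready:... / ContainersReady:..." lines.
	func describePod(p *corev1.Pod) string {
		out := string(p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if (c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) &&
				c.Status != corev1.ConditionTrue {
				out += fmt.Sprintf(" / %s:%s", c.Type, c.Reason)
			}
		}
		return out
	}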
	I0917 02:11:41.868509    4110 default_sa.go:34] waiting for default service account to be created ...
	I0917 02:11:42.057457    4110 request.go:632] Waited for 188.887232ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:11:42.057501    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:11:42.057507    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:42.057512    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:42.057516    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:42.060122    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:42.060299    4110 default_sa.go:45] found service account: "default"
	I0917 02:11:42.060314    4110 default_sa.go:55] duration metric: took 191.792113ms for default service account to be created ...
	I0917 02:11:42.060320    4110 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 02:11:42.257458    4110 request.go:632] Waited for 197.098839ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:42.257490    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:42.257495    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:42.257501    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:42.257506    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:42.261392    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:42.267316    4110 system_pods.go:86] 26 kube-system pods found
	I0917 02:11:42.267336    4110 system_pods.go:89] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running
	I0917 02:11:42.267340    4110 system_pods.go:89] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running
	I0917 02:11:42.267343    4110 system_pods.go:89] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:11:42.267356    4110 system_pods.go:89] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 02:11:42.267362    4110 system_pods.go:89] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:11:42.267366    4110 system_pods.go:89] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:11:42.267369    4110 system_pods.go:89] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running
	I0917 02:11:42.267372    4110 system_pods.go:89] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:11:42.267377    4110 system_pods.go:89] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:11:42.267380    4110 system_pods.go:89] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:11:42.267385    4110 system_pods.go:89] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 02:11:42.267389    4110 system_pods.go:89] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:11:42.267392    4110 system_pods.go:89] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:11:42.267398    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 02:11:42.267402    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:11:42.267405    4110 system_pods.go:89] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:11:42.267408    4110 system_pods.go:89] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:11:42.267411    4110 system_pods.go:89] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running
	I0917 02:11:42.267415    4110 system_pods.go:89] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:11:42.267419    4110 system_pods.go:89] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:11:42.267423    4110 system_pods.go:89] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 02:11:42.267427    4110 system_pods.go:89] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:11:42.267436    4110 system_pods.go:89] "kube-vip-ha-857000" [84b805d8-9a8f-4c6f-b18f-76c98ca4776c] Running
	I0917 02:11:42.267438    4110 system_pods.go:89] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:11:42.267441    4110 system_pods.go:89] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:11:42.267444    4110 system_pods.go:89] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:11:42.267448    4110 system_pods.go:126] duration metric: took 207.120728ms to wait for k8s-apps to be running ...
	I0917 02:11:42.267459    4110 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:11:42.267525    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:11:42.280323    4110 system_svc.go:56] duration metric: took 12.855514ms WaitForService to wait for kubelet
	I0917 02:11:42.280342    4110 kubeadm.go:582] duration metric: took 14.536306226s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
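Note: the kubelet check above is a single remote command, sudo systemctl is-active --quiet kubelet, whose exit status is the whole answer (0 means active). A sketch of running such a probe over SSH with golang.org/x/crypto/ssh; the address, user, and key path are illustrative stand-ins for the values the log shows.

	package sshprobe

	import (
		"os"

		"golang.org/x/crypto/ssh"
	)

	// kubeletActive dials the guest and runs the is-active probe; a nil error
	// from Run means exit status 0, i.e. the unit is active.
	func kubeletActive(addr, keyPath string) (bool, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return false, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return false, err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User: "docker",
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Acceptable for a throwaway test VM, never for production hosts.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		})
		if err != nil {
			return false, err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return false, err
		}
		defer sess.Close()
		// A non-zero exit surfaces as an *ssh.ExitError; treat it as "inactive".
		if err := sess.Run("sudo systemctl is-active --quiet kubelet"); err != nil {
			return false, nil
		}
		return true, nil
	}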
	I0917 02:11:42.280356    4110 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:11:42.458901    4110 request.go:632] Waited for 178.497588ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 02:11:42.458965    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 02:11:42.458970    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:42.458975    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:42.458980    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:42.461607    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:42.462345    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462358    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462367    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462370    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462374    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462377    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462380    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462383    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462386    4110 node_conditions.go:105] duration metric: took 182.022805ms to run NodePressure ...
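Note: the NodePressure pass lists the nodes once and logs each node's ephemeral-storage and CPU capacity (one pair of lines per node above). A compact sketch of the same read; illustrative only, and the real check also inspects node conditions.

	package nodecheck

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity lists every node once and reports cpu and
	// ephemeral-storage capacity, as the node_conditions pass does above.
	func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
		return nil
	}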
	I0917 02:11:42.462394    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:11:42.462412    4110 start.go:255] writing updated cluster config ...
	I0917 02:11:42.484336    4110 out.go:201] 
	I0917 02:11:42.505774    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:42.505869    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:11:42.527331    4110 out.go:177] * Starting "ha-857000-m03" control-plane node in "ha-857000" cluster
	I0917 02:11:42.569515    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:11:42.569551    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:11:42.569751    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:11:42.569769    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:11:42.569891    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:11:42.570622    4110 start.go:360] acquireMachinesLock for ha-857000-m03: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:11:42.570733    4110 start.go:364] duration metric: took 89.66µs to acquireMachinesLock for "ha-857000-m03"
	I0917 02:11:42.570758    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:11:42.570766    4110 fix.go:54] fixHost starting: m03
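Note: acquireMachinesLock serializes machine operations; the spec printed above retries every 500ms up to a 13-minute timeout, and here the lock was free (89.66µs). The stand-in below only illustrates that delay/timeout retry policy; minikube actually uses a cross-process named mutex, which this in-process channel does not reproduce.

	package machlock

	import (
		"fmt"
		"time"
	)

	// acquire retries every delay until timeout, the Delay:500ms /
	// Timeout:13m0s policy from the log. sem is a 1-buffered channel
	// standing in for the named mutex; release by receiving from it.
	func acquire(sem chan struct{}, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			select {
			case sem <- struct{}{}:
				return nil // locked
			default:
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("lock not acquired within %s", timeout)
			}
			time.Sleep(delay)
		}
	}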
	I0917 02:11:42.571203    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:11:42.571238    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:11:42.581037    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52144
	I0917 02:11:42.581469    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:11:42.581811    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:11:42.581822    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:11:42.582051    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:11:42.582209    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:42.582294    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetState
	I0917 02:11:42.582428    4110 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:42.582545    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid from json: 3442
	I0917 02:11:42.583498    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid 3442 missing from process table
	I0917 02:11:42.583556    4110 fix.go:112] recreateIfNeeded on ha-857000-m03: state=Stopped err=<nil>
	I0917 02:11:42.583568    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	W0917 02:11:42.583655    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:11:42.604438    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m03" ...
	I0917 02:11:42.678579    4110 main.go:141] libmachine: (ha-857000-m03) Calling .Start
	I0917 02:11:42.678864    4110 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:42.678945    4110 main.go:141] libmachine: (ha-857000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid
	I0917 02:11:42.680796    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid 3442 missing from process table
	I0917 02:11:42.680811    4110 main.go:141] libmachine: (ha-857000-m03) DBG | pid 3442 is in state "Stopped"
	I0917 02:11:42.680856    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid...
	I0917 02:11:42.681059    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Using UUID 3d8fdba9-dbf7-47ea-a80b-a24a99cad96e
	I0917 02:11:42.708058    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Generated MAC 16:4d:1d:5e:91:c8
	I0917 02:11:42.708080    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:11:42.708229    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3d8fdba9-dbf7-47ea-a80b-a24a99cad96e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000398c90)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:11:42.708256    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3d8fdba9-dbf7-47ea-a80b-a24a99cad96e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000398c90)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:11:42.708317    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3d8fdba9-dbf7-47ea-a80b-a24a99cad96e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/ha-857000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:11:42.708369    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3d8fdba9-dbf7-47ea-a80b-a24a99cad96e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/ha-857000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:11:42.708386    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:11:42.710198    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Pid is 4146
	I0917 02:11:42.710768    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Attempt 0
	I0917 02:11:42.710795    4110 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:42.710847    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid from json: 4146
	I0917 02:11:42.712907    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Searching for 16:4d:1d:5e:91:c8 in /var/db/dhcpd_leases ...
	I0917 02:11:42.712978    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:11:42.713009    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:11:42.713035    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:11:42.713060    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:11:42.713079    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9724}
	I0917 02:11:42.713098    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Found match: 16:4d:1d:5e:91:c8
	I0917 02:11:42.713110    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetConfigRaw
	I0917 02:11:42.713129    4110 main.go:141] libmachine: (ha-857000-m03) DBG | IP: 192.169.0.7
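Note: to find the restarted VM, the driver matches the generated MAC (16:4d:1d:5e:91:c8) against macOS's DHCP lease database, /var/db/dhcpd_leases, and takes the matching lease's IP (192.169.0.7). A sketch of that lookup, assuming the stock lease-file layout of key=value lines inside {...} blocks with a hardware-type prefix such as "1," on hw_address:

	package leases

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// ipForMAC returns the ip_address of the dhcpd lease whose hw_address
	// matches mac. Assumes ip_address precedes hw_address within each block,
	// as in the stock file format.
	func ipForMAC(mac string) (string, error) {
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			return "", err
		}
		defer f.Close()
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 {
					hw = hw[i+1:] // drop the "1," hardware-type prefix
				}
				if strings.EqualFold(hw, mac) {
					return ip, nil
				}
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}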
	I0917 02:11:42.713812    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:11:42.714067    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:11:42.714634    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:11:42.714648    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:42.714804    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:42.714912    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:42.715030    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:42.715172    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:42.715275    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:42.715462    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:42.715719    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:42.715729    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:11:42.719370    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:11:42.729567    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:11:42.730522    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:11:42.730552    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:11:42.730564    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:11:42.730573    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:11:43.130217    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:11:43.130237    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:11:43.246057    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:11:43.246080    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:11:43.246089    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:11:43.246096    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:11:43.246900    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:11:43.246909    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:11:48.954281    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:11:48.954379    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:11:48.954390    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:11:48.977816    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:11:53.786367    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:11:53.786383    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetMachineName
	I0917 02:11:53.786507    4110 buildroot.go:166] provisioning hostname "ha-857000-m03"
	I0917 02:11:53.786518    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetMachineName
	I0917 02:11:53.786619    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:53.786716    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:53.786814    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.786901    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.786991    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:53.787125    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:53.787256    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:53.787264    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m03 && echo "ha-857000-m03" | sudo tee /etc/hostname
	I0917 02:11:53.860809    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m03
	
	I0917 02:11:53.860831    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:53.860995    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:53.861092    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.861199    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.861302    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:53.861448    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:53.861610    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:53.861621    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:11:53.932575    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:11:53.932592    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:11:53.932604    4110 buildroot.go:174] setting up certificates
	I0917 02:11:53.932611    4110 provision.go:84] configureAuth start
	I0917 02:11:53.932618    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetMachineName
	I0917 02:11:53.932757    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:11:53.932853    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:53.932933    4110 provision.go:143] copyHostCerts
	I0917 02:11:53.932962    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:53.933012    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:11:53.933018    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:53.933153    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:11:53.933356    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:53.933385    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:11:53.933389    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:53.933461    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:11:53.933602    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:53.933640    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:11:53.933645    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:53.933711    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:11:53.933855    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m03 san=[127.0.0.1 192.169.0.7 ha-857000-m03 localhost minikube]
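Note: the server cert above is minted from the local CA pair (ca.pem / ca-key.pem) with exactly the SAN list the log prints: loopback, the node IP, and the host names. A condensed crypto/x509 sketch of issuing such a cert; the key size, serial, and lifetime are placeholders, not minikube's actual choices.

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a CA-signed server certificate carrying the SANs
	// printed above: 127.0.0.1, 192.169.0.7, ha-857000-m03, localhost, minikube.
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()), // placeholder serial
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0), // placeholder lifetime
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
			DNSNames:     []string{"ha-857000-m03", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}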
	I0917 02:11:54.077333    4110 provision.go:177] copyRemoteCerts
	I0917 02:11:54.077392    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:11:54.077407    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.077544    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.077643    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.077738    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.077820    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:54.116797    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:11:54.116876    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:11:54.136202    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:11:54.136278    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:11:54.156340    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:11:54.156419    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:11:54.175630    4110 provision.go:87] duration metric: took 243.006586ms to configureAuth
	I0917 02:11:54.175645    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:11:54.175825    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:54.175845    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:54.175978    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.176072    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.176183    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.176286    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.176390    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.176544    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:54.176682    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:54.176690    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:11:54.238979    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:11:54.238993    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:11:54.239102    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:11:54.239114    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.239249    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.239359    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.239453    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.239547    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.239702    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:54.239844    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:54.239889    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:11:54.314599    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:11:54.314621    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.314767    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.314854    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.314947    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.315024    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.315150    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:54.315292    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:54.315304    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:11:55.935197    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:11:55.935211    4110 machine.go:96] duration metric: took 13.220338614s to provisionDockerMachine
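Note: two details of the unit install above are worth calling out. First, the rendered drop-in carries two Environment=NO_PROXY lines; under systemd semantics the later assignment to the same variable overrides the earlier one, so the effective NO_PROXY is 192.169.0.5,192.169.0.6 (both existing cluster members). Second, the `diff ... || { mv ...; daemon-reload; restart; }` idiom makes the write idempotent: docker is only restarted when the rendered unit differs from what is installed, and here diff fails because no unit existed yet, so the unit is installed and the service enabled. A local-filesystem sketch of that compare-before-replace step:

	package unitsync

	import (
		"bytes"
		"os"
	)

	// syncUnit writes unit to path only when the content changed and reports
	// whether the caller still needs a daemon-reload plus service restart.
	func syncUnit(path string, unit []byte) (changed bool, err error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, unit) {
			return false, nil // already up to date: skip the restart
		}
		if err != nil && !os.IsNotExist(err) {
			return false, err
		}
		if err := os.WriteFile(path, unit, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}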
	I0917 02:11:55.935219    4110 start.go:293] postStartSetup for "ha-857000-m03" (driver="hyperkit")
	I0917 02:11:55.935226    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:11:55.935240    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:55.935436    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:11:55.935456    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:55.935555    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:55.935640    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:55.935720    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:55.935796    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:55.975655    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:11:55.982326    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:11:55.982340    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:11:55.982439    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:11:55.982583    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:11:55.982589    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:11:55.982752    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:11:55.995355    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:11:56.016063    4110 start.go:296] duration metric: took 80.833975ms for postStartSetup
	I0917 02:11:56.016085    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.016278    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:11:56.016292    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:56.016390    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.016474    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.016549    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.016621    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:56.056575    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:11:56.056644    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:11:56.090435    4110 fix.go:56] duration metric: took 13.519431085s for fixHost
	I0917 02:11:56.090460    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:56.090600    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.090686    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.090776    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.090860    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.090993    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:56.091136    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:56.091142    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:11:56.155623    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564316.081021180
	
	I0917 02:11:56.155639    4110 fix.go:216] guest clock: 1726564316.081021180
	I0917 02:11:56.155645    4110 fix.go:229] Guest: 2024-09-17 02:11:56.08102118 -0700 PDT Remote: 2024-09-17 02:11:56.09045 -0700 PDT m=+89.019475712 (delta=-9.42882ms)
	I0917 02:11:56.155656    4110 fix.go:200] guest clock delta is within tolerance: -9.42882ms
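fix.go reads the guest clock over SSH (the date +%s.%N above) and accepts the skew when the guest-minus-host delta stays inside a tolerance window; here the delta is -9.42882ms. A sketch of that check, with the tolerance left caller-supplied because the log does not show minikube's actual threshold:

	package sketch

	import "time"

	// clockWithinTolerance reports whether |guest - host| <= tol, the shape
	// of the "guest clock delta is within tolerance" check logged above.
	func clockWithinTolerance(guest, host time.Time, tol time.Duration) bool {
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d <= tol
	}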
	I0917 02:11:56.155660    4110 start.go:83] releasing machines lock for "ha-857000-m03", held for 13.584681554s
	I0917 02:11:56.155677    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.155816    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:11:56.177120    4110 out.go:177] * Found network options:
	I0917 02:11:56.197056    4110 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0917 02:11:56.217835    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:11:56.217862    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:56.217881    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.218511    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.218685    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.218846    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 02:11:56.218876    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:56.218892    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	W0917 02:11:56.218898    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:56.219005    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:11:56.219024    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:56.219078    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.219246    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.219309    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.219439    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.219492    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.219585    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:56.219614    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.219751    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	W0917 02:11:56.256644    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:11:56.256720    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:11:56.309886    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:11:56.309904    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:56.309980    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:56.326165    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:11:56.334717    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:11:56.343026    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:11:56.343079    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:11:56.351351    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:56.359978    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:11:56.368445    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:56.376813    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:11:56.385309    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:11:56.393895    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:11:56.402441    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:11:56.410891    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:11:56.418564    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:11:56.426298    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:56.529182    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
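The sed series above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, set SystemdCgroup = false so containerd uses the "cgroupfs" driver, migrate io.containerd.runtime.v1.linux and runc.v1 entries to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The same edits expressed in-process with Go regexps; a sketch, not minikube's implementation:

	package sketch

	import "regexp"

	// patchContainerdConfig applies the substitutions performed by the sed
	// commands in the log to the config.toml contents.
	func patchContainerdConfig(cfg string) string {
		repl := []struct{ re, with string }{
			{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`},
			{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
			{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
			{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
			{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
			{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
		}
		for _, r := range repl {
			cfg = regexp.MustCompile(r.re).ReplaceAllString(cfg, r.with)
		}
		return cfg
	}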
	I0917 02:11:56.548629    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:56.548711    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:11:56.564564    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:56.575668    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:11:56.592483    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:56.605747    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:56.616286    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:11:56.636099    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:56.646661    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:56.662025    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:11:56.665163    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:11:56.672775    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:11:56.686783    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:11:56.787618    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:11:56.902014    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:11:56.902043    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:11:56.916683    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:57.010321    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:11:59.292351    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.28197073s)
	I0917 02:11:59.292423    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:11:59.302881    4110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:11:59.315909    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:59.326097    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:11:59.423622    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:11:59.534194    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:59.650222    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:11:59.664197    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:59.675195    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:59.768785    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:11:59.834137    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:11:59.834234    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:11:59.838654    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:11:59.838726    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:11:59.844060    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:11:59.874850    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
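start.go gives the /var/run/cri-dockerd.sock path a 60s budget before probing crictl, as the "Will wait 60s for socket path" line above shows. A sketch of such a bounded wait; the 500ms poll interval is an assumption, not something the log records:

	package sketch

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists as a unix socket or the
	// timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}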
	I0917 02:11:59.874944    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:59.893142    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:59.934010    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:11:59.974908    4110 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 02:11:59.996010    4110 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0917 02:12:00.016678    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:12:00.016979    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:12:00.020450    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
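The bash pipeline above keeps /etc/hosts deduplicated: it drops any line ending in "<tab>host.minikube.internal", appends exactly one fresh entry, and copies the temp file back over /etc/hosts. The same rewrite over the file's lines in Go (a sketch; real code would also preserve permissions and write atomically):

	package sketch

	import "strings"

	// ensureHostEntry removes stale lines for name and appends ip<TAB>name,
	// matching the grep -v / echo pipeline in the log.
	func ensureHostEntry(lines []string, name, ip string) []string {
		out := make([]string, 0, len(lines)+1)
		for _, l := range lines {
			if !strings.HasSuffix(l, "\t"+name) {
				out = append(out, l)
			}
		}
		return append(out, ip+"\t"+name)
	}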
	I0917 02:12:00.029942    4110 mustload.go:65] Loading cluster: ha-857000
	I0917 02:12:00.030121    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:00.030345    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:00.030368    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:00.039149    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52286
	I0917 02:12:00.039489    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:00.039838    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:00.039856    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:00.040084    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:00.040206    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:12:00.040304    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:00.040367    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:12:00.041347    4110 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:12:00.041604    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:00.041629    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:00.050248    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52288
	I0917 02:12:00.050590    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:00.050943    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:00.050963    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:00.051142    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:00.051249    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:12:00.051358    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.7
	I0917 02:12:00.051364    4110 certs.go:194] generating shared ca certs ...
	I0917 02:12:00.051373    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:12:00.051518    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:12:00.051569    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:12:00.051578    4110 certs.go:256] generating profile certs ...
	I0917 02:12:00.051672    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:12:00.051762    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.daf177bc
	I0917 02:12:00.051812    4110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:12:00.051819    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:12:00.051841    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:12:00.051859    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:12:00.051878    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:12:00.051895    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:12:00.051919    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:12:00.051943    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:12:00.051962    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:12:00.052037    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:12:00.052085    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:12:00.052093    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:12:00.052128    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:12:00.052160    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:12:00.052188    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:12:00.052263    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:12:00.052296    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.052317    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.052334    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.052362    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:12:00.052450    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:12:00.052535    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:12:00.052624    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:12:00.052722    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:12:00.080096    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 02:12:00.083244    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 02:12:00.090969    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 02:12:00.094112    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 02:12:00.101834    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 02:12:00.104986    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 02:12:00.113430    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 02:12:00.116712    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 02:12:00.124546    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 02:12:00.127709    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 02:12:00.135587    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 02:12:00.138750    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 02:12:00.147884    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:12:00.168533    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:12:00.188900    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:12:00.208781    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:12:00.229275    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:12:00.248994    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:12:00.269569    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:12:00.289646    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:12:00.309509    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:12:00.329488    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:12:00.349487    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:12:00.369414    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 02:12:00.383327    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 02:12:00.396803    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 02:12:00.410693    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 02:12:00.424533    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 02:12:00.438144    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 02:12:00.451710    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 02:12:00.465698    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:12:00.470190    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:12:00.478670    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.482005    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.482051    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.486183    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:12:00.494427    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:12:00.503098    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.506593    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.506643    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.510950    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:12:00.519387    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:12:00.527796    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.531174    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.531231    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.535528    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
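Each of the three blocks above follows the OpenSSL trust-store convention: place the PEM under /usr/share/ca-certificates, compute its subject hash, and expose it as /etc/ssl/certs/<hash>.0 (hence 3ec20f2e.0, b5213941.0 and 51391683.0). A sketch of deriving that link, shelling out to openssl just as the log does:

	package sketch

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash symlinks pemPath as /etc/ssl/certs/<subject-hash>.0.
	func linkCertByHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // mimic ln -fs: replace any stale link
		return os.Symlink(pemPath, link)
	}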
	I0917 02:12:00.543734    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:12:00.547058    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:12:00.551336    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:12:00.555666    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:12:00.560095    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:12:00.564671    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:12:00.568907    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
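openssl x509 -checkend 86400 exits non-zero when a certificate expires within the next 24 hours, which is how these runs decide whether a cert must be regenerated. The equivalent check with crypto/x509 (a sketch):

	package sketch

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded cert's NotAfter falls
	// inside the next d (e.g. 86400*time.Second for -checkend 86400).
	func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return false, fmt.Errorf("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}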
	I0917 02:12:00.573116    4110 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.1 docker true true} ...
	I0917 02:12:00.573181    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:12:00.573213    4110 kube-vip.go:115] generating kube-vip config ...
	I0917 02:12:00.573252    4110 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:12:00.585709    4110 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:12:00.585750    4110 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
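Only a few fields of this manifest vary per cluster: the VIP (192.169.0.254 here, the APIServerHAVIP from the profile config), the port, the interface, and the lb_enable flag that kube-vip.go just auto-enabled. A toy text/template rendering two of those env entries; illustrative only, minikube renders the full manifest shown above:

	package main

	import (
		"os"
		"text/template"
	)

	// Two per-cluster knobs from the manifest, filled from a map.
	var vipEnv = template.Must(template.New("vip").Parse(
		"    - name: address\n      value: {{.VIP}}\n" +
			"    - name: lb_enable\n      value: \"{{.LB}}\"\n"))

	func main() {
		_ = vipEnv.Execute(os.Stdout, map[string]any{"VIP": "192.169.0.254", "LB": true})
	}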
	I0917 02:12:00.585815    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:12:00.593621    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:12:00.593672    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 02:12:00.600967    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 02:12:00.614925    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:12:00.628761    4110 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:12:00.642265    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:12:00.645102    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:12:00.654336    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:00.752482    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:00.767122    4110 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:12:00.767316    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:00.788252    4110 out.go:177] * Verifying Kubernetes components...
	I0917 02:12:00.808843    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:00.927434    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:00.944321    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:12:00.944565    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 02:12:00.944614    4110 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 02:12:00.944789    4110 node_ready.go:35] waiting up to 6m0s for node "ha-857000-m03" to be "Ready" ...
	I0917 02:12:00.944851    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:00.944858    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.944867    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.944872    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.946764    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:00.947061    4110 node_ready.go:49] node "ha-857000-m03" has status "Ready":"True"
	I0917 02:12:00.947072    4110 node_ready.go:38] duration metric: took 2.273862ms for node "ha-857000-m03" to be "Ready" ...
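node_ready.go's "Ready" test reduces to reading the NodeReady condition off the object returned by GET /api/v1/nodes/<name>. A sketch using the corev1 types those responses decode into:

	package sketch

	import corev1 "k8s.io/api/core/v1"

	// nodeIsReady reports whether the NodeReady condition is True, the
	// check behind `node "ha-857000-m03" has status "Ready":"True"`.
	func nodeIsReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}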
	I0917 02:12:00.947078    4110 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:12:00.947127    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:00.947133    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.947139    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.947143    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.950970    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:00.956449    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.956504    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:00.956511    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.956518    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.956526    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.959279    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.959653    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:00.959660    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.959666    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.959669    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.961657    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:00.962160    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.962170    4110 pod_ready.go:82] duration metric: took 5.706294ms for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.962176    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.962215    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nl5j5
	I0917 02:12:00.962221    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.962226    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.962230    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.966635    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:00.967113    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:00.967122    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.967128    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.967131    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.969231    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.969585    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.969594    4110 pod_ready.go:82] duration metric: took 7.413149ms for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.969601    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.969645    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000
	I0917 02:12:00.969650    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.969655    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.969659    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.971799    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.972247    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:00.972254    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.972264    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.972267    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.974411    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.974879    4110 pod_ready.go:93] pod "etcd-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.974888    4110 pod_ready.go:82] duration metric: took 5.282457ms for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.974895    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.974931    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m02
	I0917 02:12:00.974936    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.974941    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.974945    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.977288    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.977952    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:00.977959    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.977964    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.977966    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.980610    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.981051    4110 pod_ready.go:93] pod "etcd-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.981061    4110 pod_ready.go:82] duration metric: took 6.161283ms for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.981068    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.146340    4110 request.go:632] Waited for 165.222252ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:12:01.146408    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:12:01.146414    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.146420    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.146423    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.148663    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
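The "Waited ... due to client-side throttling" lines come from client-go's token-bucket limiter: with QPS and Burst left at 0 in the rest.Config dumped earlier, the client falls back to its defaults (QPS 5, Burst 10), so bursts of polling GETs queue up for a few hundred milliseconds each. A sketch of the limiter that produces those waits:

	package main

	import "k8s.io/client-go/util/flowcontrol"

	func main() {
		// client-go substitutes QPS=5, Burst=10 when the config leaves them zero.
		rl := flowcontrol.NewTokenBucketRateLimiter(5, 10)
		for i := 0; i < 20; i++ {
			rl.Accept() // blocks until a token is free; that wait is what request.go logs
		}
	}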
	I0917 02:12:01.345119    4110 request.go:632] Waited for 196.038973ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:01.345177    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:01.345186    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.345198    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.345210    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.348611    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:01.349143    4110 pod_ready.go:93] pod "etcd-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:01.349154    4110 pod_ready.go:82] duration metric: took 368.067559ms for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.349166    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.545007    4110 request.go:632] Waited for 195.782486ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:01.545050    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:01.545055    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.545061    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.545066    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.547602    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:01.745603    4110 request.go:632] Waited for 197.630153ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:01.745661    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:01.745667    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.745673    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.745676    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.748299    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:01.748902    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:01.748919    4110 pod_ready.go:82] duration metric: took 399.734114ms for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.748926    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.945883    4110 request.go:632] Waited for 196.866004ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:01.945945    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:01.945954    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.945964    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.945969    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.951958    4110 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:12:02.145413    4110 request.go:632] Waited for 192.798684ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:02.145468    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:02.145478    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.145511    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.145520    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.148357    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.149190    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:02.149203    4110 pod_ready.go:82] duration metric: took 400.265258ms for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:02.149211    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:02.345683    4110 request.go:632] Waited for 196.426528ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.345728    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.345736    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.345744    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.345751    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.348508    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.544925    4110 request.go:632] Waited for 196.020856ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.544994    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.545000    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.545006    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.545009    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.547483    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.744993    4110 request.go:632] Waited for 95.563815ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.745048    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.745054    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.745061    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.745065    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.747122    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.945441    4110 request.go:632] Waited for 197.559126ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.945475    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.945480    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.945486    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.945491    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.948036    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:03.150936    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:03.150968    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.150975    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.150980    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.153272    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:03.346424    4110 request.go:632] Waited for 192.442992ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.346514    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.346521    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.346528    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.346533    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.350998    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:03.649774    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:03.649809    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.649818    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.649823    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.652931    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:03.744972    4110 request.go:632] Waited for 90.967061ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.745023    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.745029    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.745034    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.745039    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.747431    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:04.149979    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:04.150024    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.150033    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.150037    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.153328    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:04.153812    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:04.153822    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.153828    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.153832    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.156074    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:04.156716    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:04.650904    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:04.650924    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.650931    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.650946    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.653820    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:04.654378    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:04.654386    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.654393    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.654396    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.656654    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:05.151431    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:05.151485    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.151499    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.151506    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.154809    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:05.155323    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:05.155331    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.155337    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.155340    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.156965    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:05.650343    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:05.650367    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.650413    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.650421    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.653876    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:05.654508    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:05.654516    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.654522    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.654525    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.656260    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:06.149952    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:06.149982    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.149989    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.149994    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.152142    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:06.152594    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:06.152602    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.152608    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.152611    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.154378    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:06.650007    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:06.650040    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.650049    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.650053    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.652517    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:06.653131    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:06.653138    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.653144    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.653148    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.655153    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:06.655511    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:07.150612    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:07.150642    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.150678    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.150687    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.153805    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:07.154498    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:07.154508    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.154516    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.154521    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.156264    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:07.650356    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:07.650381    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.650392    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.650401    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.653535    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:07.653958    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:07.653966    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.653972    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.653975    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.656337    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:08.150386    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:08.150440    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.150452    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.150460    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.153584    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:08.155108    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:08.155123    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.155132    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.155143    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.157038    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:08.650349    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:08.650377    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.650389    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.650398    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.654034    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:08.654828    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:08.654836    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.654843    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.654846    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.656625    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:08.656928    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:09.151423    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:09.151447    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.151459    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.151464    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.154460    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:09.154947    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:09.154956    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.154961    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.154966    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.156555    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:09.650477    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:09.650503    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.650554    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.650568    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.653583    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:09.653960    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:09.653967    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.653973    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.653983    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.655828    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:10.149696    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:10.149720    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.149732    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.149739    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.153151    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:10.153716    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:10.153726    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.153734    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.153739    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.155758    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:10.649780    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:10.649830    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.649844    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.649854    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.653210    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:10.653938    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:10.653945    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.653951    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.653956    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.655718    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:11.149497    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:11.149512    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.149525    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.149530    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.151647    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:11.152174    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:11.152181    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.152187    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.152189    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.154098    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:11.154423    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:11.650969    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:11.650998    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.651032    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.651039    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.654171    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:11.654962    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:11.654969    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.654975    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.654979    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.656692    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.150871    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:12.150884    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.150890    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.150893    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.153079    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:12.153733    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:12.153741    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.153747    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.153751    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.155608    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.650611    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:12.650636    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.650674    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.650684    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.654409    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:12.654934    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:12.654941    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.654951    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.654954    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.656676    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.657136    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:12.657145    4110 pod_ready.go:82] duration metric: took 10.507747852s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.657152    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.657184    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:12:12.657189    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.657194    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.657198    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.658893    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.659304    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:12.659312    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.659317    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.659321    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.660920    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.661222    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:12.661230    4110 pod_ready.go:82] duration metric: took 4.073163ms for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.661237    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.661269    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:12:12.661274    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.661279    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.661282    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.662821    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.663178    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:12.663186    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.663192    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.663195    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.664635    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.665084    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:12.665092    4110 pod_ready.go:82] duration metric: took 3.849688ms for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.665098    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.665131    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:12.665136    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.665142    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.665157    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.666924    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.667551    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:12.667558    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.667564    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.667566    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.669116    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:13.165275    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:13.165342    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.165359    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.165367    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.168538    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:13.169042    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:13.169049    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.169054    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.169059    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.170903    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:13.665896    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:13.665914    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.665923    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.665930    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.668510    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:13.669059    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:13.669066    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.669071    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.669074    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.670842    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:14.165888    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:14.165910    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.165935    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.165941    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.168473    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:14.169111    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:14.169118    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.169124    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.169137    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.170994    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:14.667072    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:14.667128    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.667140    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.667151    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.670650    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:14.671210    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:14.671217    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.671222    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.671226    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.672859    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:14.673218    4110 pod_ready.go:103] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:15.165335    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:15.165362    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.165375    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.165382    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.169212    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:15.169615    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:15.169623    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.169629    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.169633    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.171395    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:15.665422    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:15.665483    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.665498    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.665505    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.667889    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:15.668348    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:15.668356    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.668364    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.668369    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.670115    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.166085    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:16.166134    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.166147    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.166156    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.168879    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:16.169423    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:16.169430    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.169439    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.169442    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.171016    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.666749    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:16.666767    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.666797    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.666802    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.669480    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:16.669826    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:16.669832    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.669838    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.669842    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.671504    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.671930    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.671939    4110 pod_ready.go:82] duration metric: took 4.006767511s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.671955    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.671990    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:12:16.671995    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.672000    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.672005    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.673862    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.674451    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:12:16.674459    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.674464    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.674468    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.676355    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.676667    4110 pod_ready.go:93] pod "kube-proxy-528ht" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.676675    4110 pod_ready.go:82] duration metric: took 4.715112ms for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.676682    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.676724    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:12:16.676729    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.676734    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.676738    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.678611    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.678986    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:16.678993    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.678999    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.679003    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.680713    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.681034    4110 pod_ready.go:93] pod "kube-proxy-g9wxm" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.681043    4110 pod_ready.go:82] duration metric: took 4.356651ms for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.681050    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.681091    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:12:16.681097    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.681102    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.681106    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.682940    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.683445    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:16.683452    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.683458    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.683462    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.685017    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.685461    4110 pod_ready.go:93] pod "kube-proxy-vskbj" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.685470    4110 pod_ready.go:82] duration metric: took 4.414596ms for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.685478    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.851971    4110 request.go:632] Waited for 166.418009ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:12:16.852035    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:12:16.852064    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.852076    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.852084    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.855683    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:17.050985    4110 request.go:632] Waited for 194.718198ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.051089    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.051098    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.051110    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.051119    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.054384    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:17.054876    4110 pod_ready.go:93] pod "kube-proxy-zrqvr" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:17.054889    4110 pod_ready.go:82] duration metric: took 369.398412ms for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.054898    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.250755    4110 request.go:632] Waited for 195.811261ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:12:17.250805    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:12:17.250817    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.250830    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.250841    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.291380    4110 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0917 02:12:17.450914    4110 request.go:632] Waited for 157.443488ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:17.450956    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:17.450990    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.450996    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.450999    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.455828    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:17.456276    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:17.456286    4110 pod_ready.go:82] duration metric: took 401.376038ms for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.456294    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.651418    4110 request.go:632] Waited for 195.082221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:12:17.651455    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:12:17.651461    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.651471    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.651495    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.668422    4110 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0917 02:12:17.850764    4110 request.go:632] Waited for 181.996065ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.850819    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.850825    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.850832    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.850836    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.857947    4110 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 02:12:17.858420    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:17.858431    4110 pod_ready.go:82] duration metric: took 402.124989ms for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.858439    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:18.051442    4110 request.go:632] Waited for 192.93696ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:12:18.051483    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:12:18.051491    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.051499    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.051512    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.054127    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:18.250926    4110 request.go:632] Waited for 196.199352ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:18.250961    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:18.250966    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.251003    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.251008    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.274920    4110 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0917 02:12:18.275585    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:18.275595    4110 pod_ready.go:82] duration metric: took 417.143356ms for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:18.275606    4110 pod_ready.go:39] duration metric: took 17.328217726s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:12:18.275618    4110 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:12:18.275688    4110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:12:18.289040    4110 api_server.go:72] duration metric: took 17.521587147s to wait for apiserver process to appear ...
	I0917 02:12:18.289060    4110 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:12:18.289072    4110 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 02:12:18.292824    4110 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0917 02:12:18.292862    4110 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 02:12:18.292866    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.292872    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.292879    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.294137    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:18.294247    4110 api_server.go:141] control plane version: v1.31.1
	I0917 02:12:18.294257    4110 api_server.go:131] duration metric: took 5.192363ms to wait for apiserver health ...
	I0917 02:12:18.294263    4110 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 02:12:18.451185    4110 request.go:632] Waited for 156.882548ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.451216    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.451222    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.451248    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.451254    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.490169    4110 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0917 02:12:18.505194    4110 system_pods.go:59] 26 kube-system pods found
	I0917 02:12:18.505219    4110 system_pods.go:61] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.505226    4110 system_pods.go:61] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.505231    4110 system_pods.go:61] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:12:18.505234    4110 system_pods.go:61] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running
	I0917 02:12:18.505237    4110 system_pods.go:61] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:12:18.505240    4110 system_pods.go:61] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:12:18.505244    4110 system_pods.go:61] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:12:18.505247    4110 system_pods.go:61] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:12:18.505250    4110 system_pods.go:61] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running
	I0917 02:12:18.505273    4110 system_pods.go:61] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:12:18.505282    4110 system_pods.go:61] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running
	I0917 02:12:18.505290    4110 system_pods.go:61] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:12:18.505313    4110 system_pods.go:61] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:12:18.505323    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running
	I0917 02:12:18.505338    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:12:18.505343    4110 system_pods.go:61] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:12:18.505351    4110 system_pods.go:61] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:12:18.505361    4110 system_pods.go:61] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:12:18.505367    4110 system_pods.go:61] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running
	I0917 02:12:18.505373    4110 system_pods.go:61] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:12:18.505378    4110 system_pods.go:61] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running
	I0917 02:12:18.505384    4110 system_pods.go:61] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:12:18.505388    4110 system_pods.go:61] "kube-vip-ha-857000" [84b805d8-9a8f-4c6f-b18f-76c98ca4776c] Running
	I0917 02:12:18.505392    4110 system_pods.go:61] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:12:18.505396    4110 system_pods.go:61] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:12:18.505399    4110 system_pods.go:61] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:12:18.505406    4110 system_pods.go:74] duration metric: took 211.134036ms to wait for pod list to return data ...
	I0917 02:12:18.505413    4110 default_sa.go:34] waiting for default service account to be created ...
	I0917 02:12:18.650733    4110 request.go:632] Waited for 145.255733ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:12:18.650776    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:12:18.650782    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.650793    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.650798    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.659108    4110 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 02:12:18.659203    4110 default_sa.go:45] found service account: "default"
	I0917 02:12:18.659217    4110 default_sa.go:55] duration metric: took 153.795915ms for default service account to be created ...
	I0917 02:12:18.659227    4110 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 02:12:18.851528    4110 request.go:632] Waited for 192.225662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.851585    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.851591    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.851597    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.851600    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.855716    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:18.861599    4110 system_pods.go:86] 26 kube-system pods found
	I0917 02:12:18.861618    4110 system_pods.go:89] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.861630    4110 system_pods.go:89] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.861635    4110 system_pods.go:89] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:12:18.861638    4110 system_pods.go:89] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running
	I0917 02:12:18.861642    4110 system_pods.go:89] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:12:18.861645    4110 system_pods.go:89] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:12:18.861649    4110 system_pods.go:89] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:12:18.861653    4110 system_pods.go:89] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:12:18.861657    4110 system_pods.go:89] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running
	I0917 02:12:18.861660    4110 system_pods.go:89] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:12:18.861663    4110 system_pods.go:89] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running
	I0917 02:12:18.861666    4110 system_pods.go:89] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:12:18.861670    4110 system_pods.go:89] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:12:18.861673    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running
	I0917 02:12:18.861677    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:12:18.861682    4110 system_pods.go:89] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:12:18.861685    4110 system_pods.go:89] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:12:18.861690    4110 system_pods.go:89] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:12:18.861694    4110 system_pods.go:89] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running
	I0917 02:12:18.861698    4110 system_pods.go:89] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:12:18.861701    4110 system_pods.go:89] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running
	I0917 02:12:18.861704    4110 system_pods.go:89] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:12:18.861707    4110 system_pods.go:89] "kube-vip-ha-857000" [c577f2f1-ab99-4fbe-acc1-516a135f0377] Pending
	I0917 02:12:18.861710    4110 system_pods.go:89] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:12:18.861713    4110 system_pods.go:89] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:12:18.861715    4110 system_pods.go:89] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:12:18.861720    4110 system_pods.go:126] duration metric: took 202.461636ms to wait for k8s-apps to be running ...
	I0917 02:12:18.861726    4110 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:12:18.861778    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:12:18.882032    4110 system_svc.go:56] duration metric: took 20.298661ms WaitForService to wait for kubelet
	I0917 02:12:18.882059    4110 kubeadm.go:582] duration metric: took 18.114595178s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:12:18.882083    4110 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:12:19.052878    4110 request.go:632] Waited for 170.643294ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 02:12:19.052938    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 02:12:19.052951    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:19.052966    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:19.052976    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:19.057011    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:19.057806    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057817    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057824    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057827    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057830    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057834    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057837    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057840    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057843    4110 node_conditions.go:105] duration metric: took 175.740836ms to run NodePressure ...
	I0917 02:12:19.057851    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:12:19.057867    4110 start.go:255] writing updated cluster config ...
	I0917 02:12:19.079978    4110 out.go:201] 
	I0917 02:12:19.117280    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:19.117377    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:12:19.138898    4110 out.go:177] * Starting "ha-857000-m04" worker node in "ha-857000" cluster
	I0917 02:12:19.180945    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:12:19.180969    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:12:19.181086    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:12:19.181097    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:12:19.181167    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:12:19.181757    4110 start.go:360] acquireMachinesLock for ha-857000-m04: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:12:19.181807    4110 start.go:364] duration metric: took 37.353µs to acquireMachinesLock for "ha-857000-m04"
	I0917 02:12:19.181825    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:12:19.181830    4110 fix.go:54] fixHost starting: m04
	I0917 02:12:19.182086    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:19.182106    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:19.191065    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52292
	I0917 02:12:19.191452    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:19.191850    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:19.191867    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:19.192069    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:19.192186    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:19.192279    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetState
	I0917 02:12:19.192404    4110 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:19.192500    4110 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid from json: 3550
	I0917 02:12:19.193450    4110 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid 3550 missing from process table
	I0917 02:12:19.193488    4110 fix.go:112] recreateIfNeeded on ha-857000-m04: state=Stopped err=<nil>
	I0917 02:12:19.193498    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	W0917 02:12:19.193587    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:12:19.214824    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m04" ...
	I0917 02:12:19.289023    4110 main.go:141] libmachine: (ha-857000-m04) Calling .Start
	I0917 02:12:19.289295    4110 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:19.289356    4110 main.go:141] libmachine: (ha-857000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/hyperkit.pid
	I0917 02:12:19.289453    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Using UUID 32bc812d-06ce-423b-90a4-5417ea5ec912
	I0917 02:12:19.319068    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Generated MAC a:b6:8:34:25:a6
	I0917 02:12:19.319111    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:12:19.319291    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"32bc812d-06ce-423b-90a4-5417ea5ec912", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000289980)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:12:19.319339    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"32bc812d-06ce-423b-90a4-5417ea5ec912", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000289980)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:12:19.319395    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "32bc812d-06ce-423b-90a4-5417ea5ec912", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/ha-857000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:12:19.319498    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 32bc812d-06ce-423b-90a4-5417ea5ec912 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/ha-857000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:12:19.319538    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:12:19.321260    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Pid is 4161
	I0917 02:12:19.321886    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Attempt 0
	I0917 02:12:19.321908    4110 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:19.321989    4110 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid from json: 4161
	I0917 02:12:19.324366    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Searching for a:b6:8:34:25:a6 in /var/db/dhcpd_leases ...
	I0917 02:12:19.324461    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:12:19.324494    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:12:19.324519    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:12:19.324537    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:12:19.324552    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:12:19.324565    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Found match: a:b6:8:34:25:a6
	I0917 02:12:19.324580    4110 main.go:141] libmachine: (ha-857000-m04) DBG | IP: 192.169.0.8
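	[editor's note] The MAC-to-IP match above reads parsed entries from /var/db/dhcpd_leases and matches the generated MAC to recover the VM's address. A rough Go sketch of that lookup only (not minikube's actual lease parser; the struct and helper names are invented, and the field names mirror the "dhcp entry" lines in this log):

package main

import "fmt"

// lease mirrors the fields printed in the log's "dhcp entry" lines.
type lease struct {
	Name      string
	IPAddress string
	HWAddress string
}

// ipForMAC returns the leased IP for the given hardware address, if any.
func ipForMAC(leases []lease, mac string) (string, bool) {
	for _, l := range leases {
		if l.HWAddress == mac {
			return l.IPAddress, true
		}
	}
	return "", false
}

func main() {
	// Two of the entries seen above.
	leases := []lease{
		{"minikube", "192.169.0.7", "16:4d:1d:5e:91:c8"},
		{"minikube", "192.169.0.8", "a:b6:8:34:25:a6"},
	}
	if ip, ok := ipForMAC(leases, "a:b6:8:34:25:a6"); ok {
		fmt.Println("IP:", ip) // IP: 192.169.0.8
	}
}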
	I0917 02:12:19.324586    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetConfigRaw
	I0917 02:12:19.325317    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:19.325565    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:12:19.326089    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:12:19.326109    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:19.326263    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:19.326401    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:19.326560    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:19.326727    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:19.326852    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:19.327048    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:19.327215    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:19.327223    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:12:19.329900    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:12:19.339917    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:12:19.340861    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:12:19.340880    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:12:19.340887    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:12:19.340906    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:12:19.732737    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:12:19.732752    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:12:19.847625    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:12:19.847643    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:12:19.847688    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:12:19.847715    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:12:19.848483    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:12:19.848501    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:12:25.591852    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:12:25.591915    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:12:25.591925    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:12:25.615174    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:12:29.572071    4110 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.8:22: connect: connection refused
	I0917 02:12:32.627647    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:12:32.627664    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetMachineName
	I0917 02:12:32.627799    4110 buildroot.go:166] provisioning hostname "ha-857000-m04"
	I0917 02:12:32.627808    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetMachineName
	I0917 02:12:32.627920    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.628014    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.628110    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.628210    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.628294    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.628431    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:32.628580    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:32.628587    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m04 && echo "ha-857000-m04" | sudo tee /etc/hostname
	I0917 02:12:32.692963    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m04
	
	I0917 02:12:32.692980    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.693102    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.693193    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.693281    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.693375    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.693517    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:32.693670    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:32.693680    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:12:32.753597    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:12:32.753613    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:12:32.753629    4110 buildroot.go:174] setting up certificates
	I0917 02:12:32.753635    4110 provision.go:84] configureAuth start
	I0917 02:12:32.753642    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetMachineName
	I0917 02:12:32.753783    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:32.753886    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.753973    4110 provision.go:143] copyHostCerts
	I0917 02:12:32.754002    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:12:32.754055    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:12:32.754061    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:12:32.754199    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:12:32.754425    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:12:32.754455    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:12:32.754465    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:12:32.754535    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:12:32.754684    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:12:32.754713    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:12:32.754717    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:12:32.754781    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:12:32.754925    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m04 san=[127.0.0.1 192.169.0.8 ha-857000-m04 localhost minikube]
	I0917 02:12:32.886815    4110 provision.go:177] copyRemoteCerts
	I0917 02:12:32.886883    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:12:32.886900    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.887049    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.887156    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.887265    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.887345    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:32.921412    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:12:32.921483    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:12:32.942093    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:12:32.942165    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:12:32.962202    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:12:32.962278    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:12:32.982539    4110 provision.go:87] duration metric: took 228.892121ms to configureAuth
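	[editor's note] configureAuth above generates a server certificate whose SANs cover the node IP, hostname, localhost, and loopback, signed by the local minikube CA. A minimal self-contained Go sketch of that kind of CA-signed server cert (assumed logic; minikube's provisioner differs in detail, and the in-memory CA here stands in for the ca.pem/ca-key.pem files):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA stands in for .minikube/certs/ca.pem in this sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above:
	// san=[127.0.0.1 192.169.0.8 ha-857000-m04 localhost minikube]
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000-m04"}},
		DNSNames:     []string{"ha-857000-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // would be written to server.pem
}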
	I0917 02:12:32.982555    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:12:32.982734    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:32.982747    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:32.982882    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.982965    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.983053    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.983146    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.983222    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.983341    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:32.983471    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:32.983479    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:12:33.039112    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:12:33.039126    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:12:33.039209    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:12:33.039225    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:33.039356    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:33.039463    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.039553    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.039642    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:33.039765    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:33.039901    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:33.039948    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:12:33.105290    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:12:33.105311    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:33.105463    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:33.105557    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.105679    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.105803    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:33.106006    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:33.106166    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:33.106179    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:12:34.690044    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:12:34.690061    4110 machine.go:96] duration metric: took 15.363692529s to provisionDockerMachine
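	[editor's note] The docker.service unit above varies per node only in its Environment NO_PROXY lines and driver-specific ExecStart flags, and is installed via the write-to-.new-then-diff-and-move pattern so an unchanged unit triggers no restart. A hedged sketch of rendering such a unit with Go's text/template (the struct and field names are invented for illustration, not minikube's provision code):

package main

import (
	"os"
	"text/template"
)

// opts holds the values that vary between nodes in the rendered unit.
type opts struct {
	NoProxy          []string
	Provider         string
	InsecureRegistry string
}

const unitTmpl = `[Service]
Type=notify
Restart=on-failure
{{range .NoProxy}}Environment="NO_PROXY={{.}}"
{{end}}ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	// Values taken from the rendered unit in this log.
	_ = t.Execute(os.Stdout, opts{
		NoProxy:          []string{"192.169.0.5", "192.169.0.5,192.169.0.6", "192.169.0.5,192.169.0.6,192.169.0.7"},
		Provider:         "hyperkit",
		InsecureRegistry: "10.96.0.0/12",
	})
}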
	I0917 02:12:34.690069    4110 start.go:293] postStartSetup for "ha-857000-m04" (driver="hyperkit")
	I0917 02:12:34.690105    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:12:34.690128    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.690331    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:12:34.690344    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.690448    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.690550    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.690643    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.690734    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:34.729693    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:12:34.733386    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:12:34.733399    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:12:34.733491    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:12:34.733629    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:12:34.733635    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:12:34.733801    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:12:34.743555    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:12:34.777005    4110 start.go:296] duration metric: took 86.908647ms for postStartSetup
	I0917 02:12:34.777029    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.777213    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:12:34.777227    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.777324    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.777401    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.777484    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.777560    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:34.811015    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:12:34.811085    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:12:34.865249    4110 fix.go:56] duration metric: took 15.683145042s for fixHost
	I0917 02:12:34.865277    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.865435    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.865528    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.865626    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.865720    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.865866    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:34.866008    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:34.866017    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:12:34.922683    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564355.020144093
	
	I0917 02:12:34.922697    4110 fix.go:216] guest clock: 1726564355.020144093
	I0917 02:12:34.922703    4110 fix.go:229] Guest: 2024-09-17 02:12:35.020144093 -0700 PDT Remote: 2024-09-17 02:12:34.865267 -0700 PDT m=+127.793621612 (delta=154.877093ms)
	I0917 02:12:34.922714    4110 fix.go:200] guest clock delta is within tolerance: 154.877093ms
	I0917 02:12:34.922718    4110 start.go:83] releasing machines lock for "ha-857000-m04", held for 15.740632652s
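	[editor's note] The fix.go lines above read the guest clock over SSH via `date +%s.%N`, compare it with the host clock, and skip an explicit resync when the skew is within tolerance (here a 154.877093ms delta passed). A minimal Go sketch of that check using this run's values (the 2s tolerance is an assumption for illustration, not minikube's configured value):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports the absolute guest/host skew and whether
// it is small enough to skip resyncing the guest clock.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Guest timestamp parsed from the SSH output above: 1726564355.020144093
	guest := time.Unix(1726564355, 20144093)
	host := time.Date(2024, 9, 17, 9, 12, 34, 865267000, time.UTC) // host-side sample
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}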
	I0917 02:12:34.922744    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.922875    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:34.945234    4110 out.go:177] * Found network options:
	I0917 02:12:34.965134    4110 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0917 02:12:34.986412    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:12:34.986446    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:12:34.986459    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:12:34.986477    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.987363    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.987619    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.987838    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 02:12:34.987863    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:12:34.987882    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	W0917 02:12:34.987901    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:12:34.987917    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:12:34.988015    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:12:34.988040    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.988144    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.988241    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.988362    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.988430    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.988562    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.988636    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.988712    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:34.988798    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	W0917 02:12:35.089466    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:12:35.089538    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:12:35.103798    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:12:35.103814    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:12:35.103888    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:12:35.122855    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:12:35.131456    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:12:35.140120    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:12:35.140187    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:12:35.148614    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:12:35.156897    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:12:35.165192    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:12:35.173754    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:12:35.182471    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:12:35.191008    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:12:35.199448    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:12:35.207926    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:12:35.216411    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:12:35.228568    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:35.327014    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:12:35.346549    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:12:35.346628    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:12:35.370011    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:12:35.382502    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:12:35.397499    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:12:35.408840    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:12:35.420206    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:12:35.442422    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:12:35.453508    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:12:35.468375    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:12:35.471279    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:12:35.479407    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:12:35.492955    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:12:35.593589    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:12:35.695477    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:12:35.695504    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:12:35.710594    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:35.826600    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:12:38.101010    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.274345081s)
	I0917 02:12:38.101138    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:12:38.113882    4110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:12:38.128373    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:12:38.140107    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:12:38.249684    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:12:38.361672    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:38.469978    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:12:38.489760    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:12:38.502395    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:38.604591    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:12:38.669590    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:12:38.669684    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:12:38.674420    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:12:38.674483    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:12:38.677707    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:12:38.702126    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:12:38.702225    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:12:38.719390    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:12:38.757457    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:12:38.799117    4110 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 02:12:38.819990    4110 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0917 02:12:38.841085    4110 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	I0917 02:12:38.862007    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:38.862240    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:12:38.865326    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:12:38.874823    4110 mustload.go:65] Loading cluster: ha-857000
	I0917 02:12:38.875009    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:38.875239    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:38.875265    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:38.884252    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52315
	I0917 02:12:38.884596    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:38.885007    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:38.885024    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:38.885217    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:38.885327    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:12:38.885411    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:38.885502    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:12:38.886472    4110 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:12:38.886740    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:38.886764    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:38.895399    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52317
	I0917 02:12:38.895752    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:38.896084    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:38.896095    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:38.896312    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:38.896445    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:12:38.896532    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.8
	I0917 02:12:38.896538    4110 certs.go:194] generating shared ca certs ...
	I0917 02:12:38.896550    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:12:38.896701    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:12:38.896754    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:12:38.896764    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:12:38.896789    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:12:38.896809    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:12:38.896826    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:12:38.896910    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:12:38.896963    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:12:38.896974    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:12:38.897008    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:12:38.897042    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:12:38.897070    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:12:38.897139    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:12:38.897176    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:12:38.897196    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:12:38.897214    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:38.897242    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:12:38.917488    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:12:38.937120    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:12:38.956856    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:12:38.976762    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:12:38.997198    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:12:39.018037    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:12:39.040033    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:12:39.044757    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:12:39.053844    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:12:39.057290    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:12:39.057337    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:12:39.061592    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:12:39.070092    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:12:39.078554    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:39.082016    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:39.082086    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:39.086282    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:12:39.094779    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:12:39.103890    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:12:39.107498    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:12:39.107551    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:12:39.111799    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:12:39.120941    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:12:39.124549    4110 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 02:12:39.124586    4110 kubeadm.go:934] updating node {m04 192.169.0.8 0 v1.31.1 docker false true} ...
	I0917 02:12:39.124645    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:12:39.124713    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:12:39.132685    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:12:39.132752    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0917 02:12:39.140189    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 02:12:39.153737    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:12:39.167480    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:12:39.170335    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:12:39.180131    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:39.274978    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:39.290344    4110 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0917 02:12:39.290539    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:39.312606    4110 out.go:177] * Verifying Kubernetes components...
	I0917 02:12:39.332523    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:39.447567    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:39.466307    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:12:39.466524    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 02:12:39.466571    4110 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 02:12:39.467449    4110 node_ready.go:35] waiting up to 6m0s for node "ha-857000-m04" to be "Ready" ...
	I0917 02:12:39.467568    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:12:39.467575    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.467585    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.467591    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.470632    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:39.969561    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:12:39.969576    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.969585    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.969590    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.972203    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:39.972562    4110 node_ready.go:49] node "ha-857000-m04" has status "Ready":"True"
	I0917 02:12:39.972573    4110 node_ready.go:38] duration metric: took 505.091961ms for node "ha-857000-m04" to be "Ready" ...
	I0917 02:12:39.972579    4110 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:12:39.972614    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:39.972619    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.972625    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.972629    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.976988    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:39.982728    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:39.982773    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:39.982778    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.982795    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.982801    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.985018    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:39.985518    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:39.985526    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.985532    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.985536    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.987300    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:40.482877    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:40.482889    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.482894    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.482898    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.485392    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:40.485952    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:40.485960    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.485965    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.485972    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.487726    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:40.984290    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:40.984330    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.984337    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.984340    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.986636    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:40.987126    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:40.987134    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.987140    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.987144    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.989077    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:41.483798    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:41.483813    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.483838    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.483842    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.485913    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:41.486349    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:41.486357    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.486363    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.486366    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.487997    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:41.984399    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:41.984423    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.984436    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.984441    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.987692    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:41.988563    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:41.988571    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.988576    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.988580    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.990387    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:41.990837    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:42.483597    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:42.483651    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.483720    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.483731    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.486451    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:42.487002    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:42.487009    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.487015    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.487019    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.488735    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:42.984178    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:42.984202    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.984244    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.984250    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.987573    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:42.988040    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:42.988049    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.988056    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.988060    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.989664    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:43.484870    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:43.484884    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.484891    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.484894    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.487141    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:43.487687    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:43.487695    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.487701    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.487705    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.489384    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:43.985004    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:43.985028    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.985040    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.985047    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.988376    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:43.989251    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:43.989258    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.989264    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.989274    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.991010    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:43.991366    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:44.483323    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:44.483341    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.483350    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.483355    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.486151    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:44.486714    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:44.486722    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.486727    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.486732    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.488452    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:44.984530    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:44.984557    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.984569    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.984574    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.987518    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:44.988156    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:44.988163    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.988169    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.988173    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.989906    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:45.484413    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:45.484429    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.484436    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.484438    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.486664    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:45.487158    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:45.487166    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.487172    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.487180    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.488811    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:45.983568    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:45.983588    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.983597    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.983601    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.986094    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:45.986663    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:45.986670    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.986676    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.986681    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.988390    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:46.484237    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:46.484252    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.484258    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.484262    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.486548    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:46.487112    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:46.487120    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.487126    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.487130    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.488764    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:46.489074    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:46.984666    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:46.984685    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.984693    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.984699    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.987277    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:46.987747    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:46.987754    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.987760    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.987764    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.989871    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:47.483189    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:47.483204    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.483220    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.483225    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.485536    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:47.486040    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:47.486048    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.486053    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.486077    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.487968    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:47.983218    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:47.983261    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.983271    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.983276    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.985959    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:47.986467    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:47.986476    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.986480    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.986483    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.988256    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:48.483839    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:48.483855    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.483877    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.483881    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.486127    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:48.486742    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:48.486750    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.486756    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.486763    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.488482    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:48.983104    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:48.983116    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.983123    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.983126    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.986541    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:48.986974    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:48.986982    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.986988    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.987000    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.989572    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:48.989840    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:49.483113    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:49.483127    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.483135    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.483138    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.485418    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:49.485944    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:49.485952    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.485958    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.485965    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.488051    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:49.983392    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:49.983418    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.983430    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.983435    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.990100    4110 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 02:12:49.990521    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:49.990528    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.990534    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.990551    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.995841    4110 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:12:50.484489    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:50.484507    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.484516    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.484519    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.487282    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:50.487803    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:50.487815    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.487821    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.487826    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.489538    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:50.984752    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:50.984776    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.984788    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.984796    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.988059    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:50.988580    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:50.988587    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.988593    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.988597    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.990162    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:50.990537    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:51.483827    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:51.483847    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.483864    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.483902    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.487323    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:51.487924    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:51.487932    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.487937    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.487942    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.489844    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:51.983451    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:51.983470    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.983482    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.983488    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.986994    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:51.987525    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:51.987535    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.987543    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.987548    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.989115    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:52.483263    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:52.483288    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.483325    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.483332    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.486347    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:52.486988    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:52.486995    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.487001    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.487005    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.488688    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:52.983765    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:52.983790    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.983801    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.983810    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.986675    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:52.987089    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:52.987119    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.987125    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.987129    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.988627    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:53.484927    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:53.484941    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.484948    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.484951    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.487216    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:53.487660    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:53.487667    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.487673    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.487676    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.489219    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:53.489560    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:53.984242    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:53.984261    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.984274    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.984280    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.986802    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:53.987318    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:53.987326    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.987333    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.987336    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.989152    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:54.483277    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:54.483309    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.483353    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.483368    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.486304    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:54.486703    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:54.486709    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.486715    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.486718    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.488409    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:54.984401    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:54.984421    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.984432    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.984436    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.987150    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:54.987731    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:54.987739    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.987745    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.987762    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.990093    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:55.484219    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:55.484245    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.484263    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.484270    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.487478    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:55.488038    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:55.488046    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.488052    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.488055    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.489736    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:55.490063    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:55.983721    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:55.983738    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.983747    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.983751    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.986467    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:55.986910    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:55.986918    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.986924    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.986927    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.988668    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:56.483680    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:56.483698    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.483705    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.483708    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.486006    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:56.486509    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:56.486517    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.486523    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.486526    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.488267    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:56.984953    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:56.984979    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.984991    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.984998    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.988958    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:56.989556    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:56.989567    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.989575    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.989580    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.991555    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.483204    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:57.483220    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.483244    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.483257    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.489651    4110 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 02:12:57.491669    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.491685    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.491693    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.491697    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.500745    4110 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0917 02:12:57.502366    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.502386    4110 pod_ready.go:82] duration metric: took 17.519343583s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.502398    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.502483    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nl5j5
	I0917 02:12:57.502497    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.502507    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.502512    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.512509    4110 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0917 02:12:57.513793    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.513807    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.513817    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.513823    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.522244    4110 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 02:12:57.522585    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.522595    4110 pod_ready.go:82] duration metric: took 20.190892ms for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.522609    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.522650    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000
	I0917 02:12:57.522656    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.522662    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.522666    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.527526    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:57.528075    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.528084    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.528089    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.528100    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.530647    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:57.531009    4110 pod_ready.go:93] pod "etcd-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.531019    4110 pod_ready.go:82] duration metric: took 8.403704ms for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.531025    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.531068    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m02
	I0917 02:12:57.531073    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.531082    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.531087    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.533324    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:57.533687    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:57.533694    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.533700    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.533704    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.535601    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.535875    4110 pod_ready.go:93] pod "etcd-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.535883    4110 pod_ready.go:82] duration metric: took 4.853562ms for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.535902    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.535944    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:12:57.535950    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.535956    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.535960    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.537587    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.537964    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:57.537972    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.537978    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.537982    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.539462    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.539797    4110 pod_ready.go:93] pod "etcd-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.539805    4110 pod_ready.go:82] duration metric: took 3.894392ms for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.539816    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.684040    4110 request.go:632] Waited for 144.185674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:57.684081    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:57.684104    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.684125    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.684132    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.686547    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:57.883303    4110 request.go:632] Waited for 196.17665ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.883378    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.883388    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.883398    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.883406    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.886942    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:57.887555    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.887569    4110 pod_ready.go:82] duration metric: took 347.737487ms for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.887576    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.083903    4110 request.go:632] Waited for 196.258589ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:58.084076    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:58.084095    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.084104    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.084111    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.087323    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:58.284752    4110 request.go:632] Waited for 196.829301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:58.284841    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:58.284851    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.284863    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.284871    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.287836    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:58.288234    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:58.288243    4110 pod_ready.go:82] duration metric: took 400.655079ms for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.288251    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.484581    4110 request.go:632] Waited for 196.285151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:58.484627    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:58.484634    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.484670    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.484676    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.487401    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:58.683590    4110 request.go:632] Waited for 195.669934ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:58.683635    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:58.683643    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.683695    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.683709    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.687024    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:58.687397    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:58.687407    4110 pod_ready.go:82] duration metric: took 399.144074ms for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.687414    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.884795    4110 request.go:632] Waited for 197.34012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:12:58.884845    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:12:58.884854    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.884862    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.884886    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.887327    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:59.083807    4110 request.go:632] Waited for 195.949253ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:59.083945    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:59.083961    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.083973    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.083980    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.087431    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:59.087851    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:59.087864    4110 pod_ready.go:82] duration metric: took 400.438219ms for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.087874    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.283487    4110 request.go:632] Waited for 195.551174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:12:59.283570    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:12:59.283587    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.283598    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.283604    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.286668    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:59.483240    4110 request.go:632] Waited for 196.050684ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:59.483272    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:59.483277    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.483284    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.483287    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.485481    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:59.485790    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:59.485799    4110 pod_ready.go:82] duration metric: took 397.912163ms for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.485808    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.684196    4110 request.go:632] Waited for 198.346846ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:59.684283    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:59.684289    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.684295    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.684299    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.686349    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:59.883921    4110 request.go:632] Waited for 197.130794ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:59.883972    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:59.883980    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.884030    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.884039    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.888316    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:59.888770    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:59.888788    4110 pod_ready.go:82] duration metric: took 402.964156ms for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.888815    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.083631    4110 request.go:632] Waited for 194.730555ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:13:00.083713    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:13:00.083720    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.083728    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.083732    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.086353    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:00.285261    4110 request.go:632] Waited for 198.400376ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:13:00.285348    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:13:00.285356    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.285364    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.285370    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.287853    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:00.288149    4110 pod_ready.go:93] pod "kube-proxy-528ht" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:00.288159    4110 pod_ready.go:82] duration metric: took 399.322905ms for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.288167    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.484621    4110 request.go:632] Waited for 196.39101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:13:00.484716    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:13:00.484727    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.484737    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.484744    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.488045    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:00.685321    4110 request.go:632] Waited for 196.686181ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:00.685381    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:00.685438    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.685455    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.685464    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.688919    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:00.689362    4110 pod_ready.go:93] pod "kube-proxy-g9wxm" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:00.689374    4110 pod_ready.go:82] duration metric: took 401.194339ms for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.689383    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.884950    4110 request.go:632] Waited for 195.521785ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:13:00.884994    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:13:00.885018    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.885025    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.885034    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.887231    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:01.084761    4110 request.go:632] Waited for 197.012037ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.084795    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.084800    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.084806    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.084810    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.088892    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:13:01.089243    4110 pod_ready.go:93] pod "kube-proxy-vskbj" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:01.089253    4110 pod_ready.go:82] duration metric: took 399.857039ms for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.089261    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.284602    4110 request.go:632] Waited for 195.290385ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:13:01.284640    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:13:01.284645    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.284672    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.284680    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.286636    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:13:01.483312    4110 request.go:632] Waited for 196.269648ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:01.483391    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:01.483403    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.483413    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.483434    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.486551    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:01.486934    4110 pod_ready.go:93] pod "kube-proxy-zrqvr" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:01.486943    4110 pod_ready.go:82] duration metric: took 397.670619ms for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.486950    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.683659    4110 request.go:632] Waited for 196.646108ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:13:01.683796    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:13:01.683807    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.683819    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.683825    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.686996    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:01.884224    4110 request.go:632] Waited for 196.55945ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.884363    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.884374    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.884385    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.884393    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.888135    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:01.888538    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:01.888551    4110 pod_ready.go:82] duration metric: took 401.588084ms for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.888559    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.083387    4110 request.go:632] Waited for 194.732026ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:13:02.083482    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:13:02.083493    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.083503    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.083512    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.087127    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:02.284704    4110 request.go:632] Waited for 197.205174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:02.284756    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:02.284761    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.284768    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.284773    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.287752    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:02.288038    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:02.288049    4110 pod_ready.go:82] duration metric: took 399.476957ms for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.288056    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.485154    4110 request.go:632] Waited for 197.02881ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:13:02.485191    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:13:02.485198    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.485206    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.485211    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.487672    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:02.685336    4110 request.go:632] Waited for 197.331043ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:02.685388    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:02.685397    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.685411    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.685417    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.688565    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:02.688910    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:02.688918    4110 pod_ready.go:82] duration metric: took 400.85077ms for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.688929    4110 pod_ready.go:39] duration metric: took 22.715951136s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
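Each wait above pairs a GET on the pod with a GET on its node, then declares the pod "Ready" from its status conditions. A minimal sketch of that condition check, under the assumption (from the pod_ready.go lines) that readiness means the PodReady condition is True; the helper name is illustrative:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's PodReady condition is True,
    // which is what the `has status "Ready":"True"` lines above reflect.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
    		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
    	}}}
    	fmt.Println(isPodReady(p)) // true
    }
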
	I0917 02:13:02.688942    4110 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:13:02.689000    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:13:02.699631    4110 system_svc.go:56] duration metric: took 10.684367ms WaitForService to wait for kubelet
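The kubelet check above is a single remote command whose exit code carries the whole answer: `systemctl is-active --quiet` prints nothing and exits 0 only when the unit is active. A local sketch of the same probe (minikube runs it over SSH via ssh_runner; the plain exec here is a simplification):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// `systemctl is-active --quiet <unit>` exits 0 iff the unit is
    	// active, so a non-nil error means "not running" (or systemctl
    	// itself failed).
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }
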
	I0917 02:13:02.699646    4110 kubeadm.go:582] duration metric: took 23.408872965s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:13:02.699663    4110 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:13:02.884773    4110 request.go:632] Waited for 185.024169ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 02:13:02.884858    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 02:13:02.884867    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.884878    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.884887    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.888704    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:02.889505    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889516    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889528    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889534    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889537    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889540    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889543    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889545    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889549    4110 node_conditions.go:105] duration metric: took 189.878189ms to run NodePressure ...
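The NodePressure step lists all nodes in one request and reads capacity straight off each node's status, which is where the four storage/cpu pairs above come from, one pair per node. A sketch of that read, assuming a clientset built as in the throttling example earlier (the kubeconfig path is again illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Matches the "ephemeral capacity is 17734596Ki" / "cpu
    		// capacity is 2" pairs logged above.
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    }
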
	I0917 02:13:02.889557    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:13:02.889572    4110 start.go:255] writing updated cluster config ...
	I0917 02:13:02.889954    4110 ssh_runner.go:195] Run: rm -f paused
	I0917 02:13:02.930446    4110 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0917 02:13:02.983109    4110 out.go:201] 
	W0917 02:13:03.020673    4110 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0917 02:13:03.057789    4110 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0917 02:13:03.135680    4110 out.go:177] * Done! kubectl is now configured to use "ha-857000" cluster and "default" namespace by default
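The skew warning a few lines up follows from kubectl's support policy: kubectl is supported within one minor version of the cluster, and 1.29 against 1.31 is a skew of two minors, hence "minor skew: 2" and the suggestion to use the bundled kubectl. A hedged sketch of that arithmetic; the parsing helper is illustrative, not minikube's implementation:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor number from a "major.minor.patch" string;
    // illustrative helper, with error handling elided.
    func minor(v string) int {
    	m, _ := strconv.Atoi(strings.Split(v, ".")[1])
    	return m
    }

    func main() {
    	client, server := "1.29.2", "1.31.1" // versions from the log above
    	skew := minor(server) - minor(client)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("minor skew: %d\n", skew) // 2, outside the +/-1 support window
    }
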
	
	
	==> Docker <==
	Sep 17 09:12:18 ha-857000 cri-dockerd[1413]: time="2024-09-17T09:12:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc1d198ffe0b277108e5042e64604ceb887f7f17cd5814893fbf789f1ce180f0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316039322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316201907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316216597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316284213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356401685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356591613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356646706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356901392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358210462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358271414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358284287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358347315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361819988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361879924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361892293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361954784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:48 ha-857000 dockerd[1160]: time="2024-09-17T09:12:48.289404793Z" level=info msg="ignoring event" container=67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:12:48 ha-857000 dockerd[1166]: time="2024-09-17T09:12:48.290629069Z" level=info msg="shim disconnected" id=67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7 namespace=moby
	Sep 17 09:12:48 ha-857000 dockerd[1166]: time="2024-09-17T09:12:48.290966877Z" level=warning msg="cleaning up after shim disconnected" id=67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7 namespace=moby
	Sep 17 09:12:48 ha-857000 dockerd[1166]: time="2024-09-17T09:12:48.291008241Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269678049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269745426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269758363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269841312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d940d576a500a       6e38f40d628db                                                                                         9 seconds ago        Running             storage-provisioner       2                   6fb8068a5c29f       storage-provisioner
	119f2deb32f13       8c811b4aec35f                                                                                         51 seconds ago       Running             busybox                   1                   fc1d198ffe0b2       busybox-7dff88458-4jzg8
	b7aa83ae3a822       c69fa2e9cbf5f                                                                                         51 seconds ago       Running             coredns                   1                   f4e7a7b3c65e5       coredns-7c65d6cfc9-nl5j5
	c37a677e31180       60c005f310ff3                                                                                         51 seconds ago       Running             kube-proxy                1                   5294422217d99       kube-proxy-vskbj
	3d889c7c8da7e       12968670680f4                                                                                         51 seconds ago       Running             kindnet-cni               1                   80326e6e99372       kindnet-7pf7v
	7b8b62bf7340c       c69fa2e9cbf5f                                                                                         51 seconds ago       Running             coredns                   1                   f4cf87ea66207       coredns-7c65d6cfc9-fg65r
	67814a4514b10       6e38f40d628db                                                                                         52 seconds ago       Exited              storage-provisioner       1                   6fb8068a5c29f       storage-provisioner
	ca7fe8ccd4c53       175ffd71cce3d                                                                                         About a minute ago   Running             kube-controller-manager   6                   77f536a07a3a6       kube-controller-manager-ha-857000
	475dedee37228       6bab7719df100                                                                                         About a minute ago   Running             kube-apiserver            6                   0968090389d54       kube-apiserver-ha-857000
	37d6d6479e30b       38af8ddebf499                                                                                         2 minutes ago        Running             kube-vip                  1                   2842ed202c474       kube-vip-ha-857000
	00ff29c213716       9aa1fad941575                                                                                         2 minutes ago        Running             kube-scheduler            2                   309841a63d772       kube-scheduler-ha-857000
	13b7f8a93ad49       175ffd71cce3d                                                                                         2 minutes ago        Exited              kube-controller-manager   5                   77f536a07a3a6       kube-controller-manager-ha-857000
	8c0804e78de8f       2e96e5913fc06                                                                                         2 minutes ago        Running             etcd                      2                   6cfb11ed1d6ba       etcd-ha-857000
	a18a6b023cd60       6bab7719df100                                                                                         2 minutes ago        Exited              kube-apiserver            5                   0968090389d54       kube-apiserver-ha-857000
	034279696db8f       38af8ddebf499                                                                                         6 minutes ago        Exited              kube-vip                  0                   4205e70bfa1bb       kube-vip-ha-857000
	d9fae1497b048       9aa1fad941575                                                                                         6 minutes ago        Exited              kube-scheduler            1                   37d9fe68f2e59       kube-scheduler-ha-857000
	f4f59b8c76404       2e96e5913fc06                                                                                         6 minutes ago        Exited              etcd                      1                   a23094a650513       etcd-ha-857000
	fe908ac73b00f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   9 minutes ago        Exited              busybox                   0                   80864159ef38e       busybox-7dff88458-4jzg8
	521527f17691c       c69fa2e9cbf5f                                                                                         11 minutes ago       Exited              coredns                   0                   aa21641a5b16e       coredns-7c65d6cfc9-nl5j5
	f991c8e956d90       c69fa2e9cbf5f                                                                                         11 minutes ago       Exited              coredns                   0                   da08087b51cd9       coredns-7c65d6cfc9-fg65r
	5d84a01abd3e7       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              12 minutes ago       Exited              kindnet-cni               0                   38db6fab73655       kindnet-7pf7v
	0b03e5e488939       60c005f310ff3                                                                                         12 minutes ago       Exited              kube-proxy                0                   067bc1b2ad7fa       kube-proxy-vskbj
	
	
	==> coredns [521527f17691] <==
	[INFO] 10.244.2.2:33230 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100028s
	[INFO] 10.244.2.2:37727 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077614s
	[INFO] 10.244.2.2:51233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090375s
	[INFO] 10.244.1.2:43082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115984s
	[INFO] 10.244.1.2:45048 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000071244s
	[INFO] 10.244.1.2:48877 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106601s
	[INFO] 10.244.1.2:59235 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000068348s
	[INFO] 10.244.1.2:53808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064222s
	[INFO] 10.244.1.2:54982 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064992s
	[INFO] 10.244.0.4:59177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012236s
	[INFO] 10.244.0.4:59468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096608s
	[INFO] 10.244.0.4:49953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108018s
	[INFO] 10.244.2.2:36658 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081427s
	[INFO] 10.244.1.2:53166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140458s
	[INFO] 10.244.1.2:60442 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069729s
	[INFO] 10.244.0.4:60564 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007076s
	[INFO] 10.244.0.4:57696 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000125726s
	[INFO] 10.244.2.2:33447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114855s
	[INFO] 10.244.2.2:49647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000058138s
	[INFO] 10.244.2.2:55869 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00009725s
	[INFO] 10.244.1.2:49826 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096631s
	[INFO] 10.244.1.2:33376 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000046366s
	[INFO] 10.244.1.2:47184 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7b8b62bf7340] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40424 - 46793 "HINFO IN 2652948645074262826.4033840954787183129. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019948501s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[345670875]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.718) (total time: 30000ms):
	Trace[345670875]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (09:12:48.718)
	Trace[345670875]: [30.000647992s] [30.000647992s] END
	[INFO] plugin/kubernetes: Trace[990255223]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.716) (total time: 30002ms):
	Trace[990255223]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (09:12:48.718)
	Trace[990255223]: [30.002122547s] [30.002122547s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1561533284]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.716) (total time: 30004ms):
	Trace[1561533284]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (09:12:48.720)
	Trace[1561533284]: [30.004471134s] [30.004471134s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
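Both restarted coredns instances spend their first ~30s unable to list Namespaces, Services, and EndpointSlices: 10.96.0.1:443 is the in-cluster kubernetes Service VIP, and the dial timeouts line up with the kube-apiserver container still coming back up (attempt 6 in the container status above). A minimal probe that reproduces the failing check, with the address taken from the log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The coredns reflector errors above are, at bottom, this dial
    	// failing: a TCP connect to the kubernetes Service VIP that times
    	// out while the apiserver (and the proxy rules in front of it)
    	// are not yet serving.
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
    	if err != nil {
    		fmt.Println("dial failed:", err) // matches "i/o timeout" in the log
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver VIP reachable")
    }
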
	
	
	==> coredns [b7aa83ae3a82] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48468 - 41934 "HINFO IN 5248560894606224369.8303849678443807322. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019682687s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[134011415]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.720) (total time: 30000ms):
	Trace[134011415]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (09:12:48.721)
	Trace[134011415]: [30.000772699s] [30.000772699s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1931337556]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.720) (total time: 30001ms):
	Trace[1931337556]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (09:12:48.721)
	Trace[1931337556]: [30.001621273s] [30.001621273s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2093896532]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.720) (total time: 30001ms):
	Trace[2093896532]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (09:12:48.721)
	Trace[2093896532]: [30.001436763s] [30.001436763s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [f991c8e956d9] <==
	[INFO] 10.244.1.2:36169 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206963s
	[INFO] 10.244.1.2:33814 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000088589s
	[INFO] 10.244.1.2:57385 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000535008s
	[INFO] 10.244.0.4:54856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135529s
	[INFO] 10.244.0.4:47831 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.019088159s
	[INFO] 10.244.0.4:46325 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201714s
	[INFO] 10.244.0.4:45239 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000255383s
	[INFO] 10.244.0.4:55042 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000141827s
	[INFO] 10.244.2.2:47888 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108401s
	[INFO] 10.244.2.2:41486 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00044994s
	[INFO] 10.244.2.2:50623 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082841s
	[INFO] 10.244.1.2:54143 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098038s
	[INFO] 10.244.1.2:38802 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046632s
	[INFO] 10.244.0.4:39532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.002579505s
	[INFO] 10.244.2.2:53978 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077749s
	[INFO] 10.244.2.2:60710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092889s
	[INFO] 10.244.2.2:51255 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044117s
	[INFO] 10.244.1.2:36996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056219s
	[INFO] 10.244.1.2:39487 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090704s
	[INFO] 10.244.0.4:39831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131192s
	[INFO] 10.244.0.4:35770 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154922s
	[INFO] 10.244.2.2:45820 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113973s
	[INFO] 10.244.1.2:44519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120184s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-857000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T02_00_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:00:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:13:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:00:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:00:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:00:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-857000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 54854ca4cf93431694d9ad27a68ef89d
	  System UUID:                f6fb40b6-0000-0000-91c0-dbf4ea1b682c
	  Boot ID:                    a1af0517-f4c2-4eae-96db-f7479d049a6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4jzg8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 coredns-7c65d6cfc9-fg65r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-nl5j5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-857000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-7pf7v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-857000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-857000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vskbj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-857000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-857000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 50s                    kube-proxy       
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                    kubelet          Node ha-857000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                    kubelet          Node ha-857000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                    kubelet          Node ha-857000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  NodeReady                11m                    kubelet          Node ha-857000 status is now: NodeReady
	  Normal  RegisteredNode           11m                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           9m57s                  node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           7m48s                  node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  NodeHasSufficientMemory  2m24s (x8 over 2m24s)  kubelet          Node ha-857000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m24s (x8 over 2m24s)  kubelet          Node ha-857000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m24s (x7 over 2m24s)  kubelet          Node ha-857000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           90s                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           89s                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           61s                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
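In the Allocated resources table above, the percentages are computed against the node's Allocatable and truncated rather than rounded: 950m requested of 2 CPUs (2000m) is 47.5%, shown as 47%, and 290Mi of 2164336Ki memory is ~13.7%, shown as 13%. A sketch of that arithmetic using the API's quantity type, with the values copied from the ha-857000 table:

    package main

    import (
    	"fmt"

    	"k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
    	// Requests and Allocatable values from the ha-857000 node above.
    	cpuReq := resource.MustParse("950m")
    	cpuAlloc := resource.MustParse("2")
    	memReq := resource.MustParse("290Mi")
    	memAlloc := resource.MustParse("2164336Ki")

    	cpuPct := float64(cpuReq.MilliValue()) / float64(cpuAlloc.MilliValue()) * 100
    	memPct := float64(memReq.Value()) / float64(memAlloc.Value()) * 100
    	// Truncation (not rounding) reproduces the table: 47% and 13%.
    	fmt.Printf("cpu: %d%%  memory: %d%%\n", int64(cpuPct), int64(memPct))
    }
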
	
	
	Name:               ha-857000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_01_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:01:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:13:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:11:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-857000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 39fe1ffb0a9e4afb9fa3c09c6b13fed7
	  System UUID:                19404b28-0000-0000-842d-d4858a62cbd3
	  Boot ID:                    625329b0-bed9-4da5-90fd-2859c5b852dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mhjf6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 etcd-ha-857000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-vh2h2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-857000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-857000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-zrqvr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-857000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-857000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 87s                  kube-proxy       
	  Normal   Starting                 7m52s                kube-proxy       
	  Normal   Starting                 11m                  kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node ha-857000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node ha-857000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)    kubelet          Node ha-857000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                  node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           9m58s                node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   Starting                 7m57s                kubelet          Starting kubelet.
	  Warning  Rebooted                 7m57s                kubelet          Node ha-857000-m02 has been rebooted, boot id: b4c87c19-d878-45a1-b0c5-442ae4d2861b
	  Normal   NodeHasSufficientPID     7m57s                kubelet          Node ha-857000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m57s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m57s                kubelet          Node ha-857000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m57s                kubelet          Node ha-857000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           7m49s                node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   Starting                 103s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node ha-857000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node ha-857000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     103s (x7 over 103s)  kubelet          Node ha-857000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           91s                  node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           90s                  node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           62s                  node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	
	
	Name:               ha-857000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_03_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:03:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:13:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-857000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 69dae176c7914316a8660d135e30666c
	  System UUID:                3d8f47ea-0000-0000-a80b-a24a99cad96e
	  Boot ID:                    e1620cd8-3a62-4426-a27e-eeaf7b39756d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5x9l8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 etcd-ha-857000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-vc6z5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-857000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-857000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-g9wxm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-857000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-857000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 65s                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-857000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-857000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-857000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           9m58s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           7m49s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           91s                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           90s                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   Starting                 70s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  69s                kubelet          Node ha-857000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s                kubelet          Node ha-857000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s                kubelet          Node ha-857000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 69s                kubelet          Node ha-857000-m03 has been rebooted, boot id: e1620cd8-3a62-4426-a27e-eeaf7b39756d
	  Normal   RegisteredNode           62s                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	
	
	Name:               ha-857000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_04_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:04:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:12:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-857000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 15c3f15f82fe4af0a76f2083dcf53a13
	  System UUID:                32bc423b-0000-0000-90a4-5417ea5ec912
	  Boot ID:                    cd10fc3d-989b-457a-8925-881b38fed37e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4jk9v       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m5s
	  kube-system                 kube-proxy-528ht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 8m58s                kube-proxy       
	  Normal   Starting                 28s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m5s (x2 over 9m5s)  kubelet          Node ha-857000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m5s (x2 over 9m5s)  kubelet          Node ha-857000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m5s (x2 over 9m5s)  kubelet          Node ha-857000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m4s                 node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           9m3s                 node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           9m3s                 node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   NodeReady                8m42s                kubelet          Node ha-857000-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m49s                node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           91s                  node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           90s                  node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           62s                  node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   NodeNotReady             51s                  node-controller  Node ha-857000-m04 status is now: NodeNotReady
	  Normal   Starting                 31s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  31s (x3 over 31s)    kubelet          Node ha-857000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s (x3 over 31s)    kubelet          Node ha-857000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     31s (x3 over 31s)    kubelet          Node ha-857000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 31s (x2 over 31s)    kubelet          Node ha-857000-m04 has been rebooted, boot id: cd10fc3d-989b-457a-8925-881b38fed37e
	  Normal   NodeReady                31s (x2 over 31s)    kubelet          Node ha-857000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035828] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007970] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.690889] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006763] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.660573] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.226234] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.530337] systemd-fstab-generator[461]: Ignoring "noauto" option for root device
	[  +0.102427] systemd-fstab-generator[473]: Ignoring "noauto" option for root device
	[  +1.905407] systemd-fstab-generator[1088]: Ignoring "noauto" option for root device
	[  +0.264183] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +0.055811] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.051134] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +0.114709] systemd-fstab-generator[1152]: Ignoring "noauto" option for root device
	[  +2.420834] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.093862] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	[  +0.101457] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.112591] systemd-fstab-generator[1405]: Ignoring "noauto" option for root device
	[  +0.460313] systemd-fstab-generator[1565]: Ignoring "noauto" option for root device
	[  +6.769000] kauditd_printk_skb: 212 callbacks suppressed
	[Sep17 09:11] kauditd_printk_skb: 40 callbacks suppressed
	[Sep17 09:12] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [8c0804e78de8] <==
	{"level":"warn","ts":"2024-09-17T09:11:52.642983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T09:11:52.663148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T09:11:52.743398Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T09:11:52.834019Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"4843c5334ac100b7","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:11:52.834231Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4843c5334ac100b7","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:11:56.836371Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"4843c5334ac100b7","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:11:56.836465Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4843c5334ac100b7","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:11:57.474154Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:11:57.474326Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:12:00.837987Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"4843c5334ac100b7","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:12:00.838171Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4843c5334ac100b7","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:12:02.474909Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:12:02.474924Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-17T09:12:02.527934Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:12:02.528179Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:12:02.553614Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:12:02.656074Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"4843c5334ac100b7","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-17T09:12:02.656117Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:12:02.671567Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"4843c5334ac100b7","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-17T09:12:02.671803Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4843c5334ac100b7"}
	{"level":"info","ts":"2024-09-17T09:12:03.645158Z","caller":"traceutil/trace.go:171","msg":"trace[1621339428] linearizableReadLoop","detail":"{readStateIndex:2219; appliedIndex:2219; }","duration":"123.347982ms","start":"2024-09-17T09:12:03.521794Z","end":"2024-09-17T09:12:03.645142Z","steps":["trace[1621339428] 'read index received'  (duration: 123.341929ms)","trace[1621339428] 'applied index is now lower than readState.Index'  (duration: 4.903µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T09:12:03.645527Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.681467ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-g9wxm\" ","response":"range_response_count:1 size:5191"}
	{"level":"info","ts":"2024-09-17T09:12:03.645594Z","caller":"traceutil/trace.go:171","msg":"trace[2012729741] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-g9wxm; range_end:; response_count:1; response_revision:1897; }","duration":"123.79703ms","start":"2024-09-17T09:12:03.521791Z","end":"2024-09-17T09:12:03.645588Z","steps":["trace[2012729741] 'agreement among raft nodes before linearized reading'  (duration: 123.482937ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T09:12:03.647767Z","caller":"traceutil/trace.go:171","msg":"trace[1450641964] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1898; }","duration":"121.988853ms","start":"2024-09-17T09:12:03.525765Z","end":"2024-09-17T09:12:03.647754Z","steps":["trace[1450641964] 'process raft request'  (duration: 121.923204ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T09:13:08.579639Z","caller":"traceutil/trace.go:171","msg":"trace[2135392401] transaction","detail":"{read_only:false; response_revision:2205; number_of_response:1; }","duration":"108.477653ms","start":"2024-09-17T09:13:08.471150Z","end":"2024-09-17T09:13:08.579628Z","steps":["trace[2135392401] 'process raft request'  (duration: 108.403212ms)"],"step_count":1}
	
	
	==> etcd [f4f59b8c7640] <==
	{"level":"info","ts":"2024-09-17T09:10:21.875702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:10:23.692511Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275704,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:10:24.194017Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275704,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:10:24.278276Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:10:24.278324Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:10:24.301258Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-17T09:10:24.301488Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"info","ts":"2024-09-17T09:10:24.470887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.470976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.470995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.471013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.471022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:10:24.694867Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275704,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:10:24.938557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.746471868s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T09:10:24.938607Z","caller":"traceutil/trace.go:171","msg":"trace[802347161] range","detail":"{range_begin:; range_end:; }","duration":"1.746534049s","start":"2024-09-17T09:10:23.192066Z","end":"2024-09-17T09:10:24.938600Z","steps":["trace[802347161] 'agreement among raft nodes before linearized reading'  (duration: 1.746469617s)"],"step_count":1}
	{"level":"error","ts":"2024-09-17T09:10:24.938646Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: context canceled\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 09:13:10 up 2 min,  0 users,  load average: 1.00, 0.42, 0.15
	Linux ha-857000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3d889c7c8da7] <==
	I0917 09:12:39.612978       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:12:49.606629       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:12:49.606712       1 main.go:299] handling current node
	I0917 09:12:49.606742       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:12:49.606793       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:12:49.606920       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:12:49.606967       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:12:49.607060       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:12:49.607108       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:12:59.612269       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:12:59.612291       1 main.go:299] handling current node
	I0917 09:12:59.612301       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:12:59.612305       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:12:59.612392       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:12:59.612417       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:12:59.612453       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:12:59.612507       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:13:09.611849       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:13:09.611906       1 main.go:299] handling current node
	I0917 09:13:09.611932       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:13:09.611940       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:13:09.612064       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:13:09.612072       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:13:09.612157       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:13:09.612166       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [5d84a01abd3e] <==
	I0917 09:05:22.964948       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:32.966280       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:32.966503       1 main.go:299] handling current node
	I0917 09:05:32.966605       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:32.966739       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:32.966951       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:32.967059       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:32.967333       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:32.967449       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:42.964585       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:42.964999       1 main.go:299] handling current node
	I0917 09:05:42.965252       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:42.965422       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:42.965746       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:42.965829       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:42.966204       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:42.966357       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965279       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:52.965376       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:52.965533       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:52.965592       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:52.965673       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:52.965753       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965812       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:52.965902       1 main.go:299] handling current node
	
	
	==> kube-apiserver [475dedee3722] <==
	I0917 09:11:36.333360       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 09:11:36.335609       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 09:11:36.383731       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 09:11:36.383763       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 09:11:36.384428       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 09:11:36.385090       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 09:11:36.385168       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 09:11:36.385606       1 aggregator.go:171] initial CRD sync complete...
	I0917 09:11:36.385745       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 09:11:36.386077       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 09:11:36.386187       1 cache.go:39] Caches are synced for autoregister controller
	I0917 09:11:36.388938       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 09:11:36.396198       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 09:11:36.396611       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0917 09:11:36.396812       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	W0917 09:11:36.438133       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0917 09:11:36.461867       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 09:11:36.465355       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 09:11:36.465387       1 policy_source.go:224] refreshing policies
	I0917 09:11:36.484251       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 09:11:36.540432       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 09:11:36.548136       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0917 09:11:36.554355       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0917 09:11:37.296848       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 09:11:37.666999       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	
	
	==> kube-apiserver [a18a6b023cd6] <==
	I0917 09:10:52.375949       1 options.go:228] external host was not specified, using 192.169.0.5
	I0917 09:10:52.377617       1 server.go:142] Version: v1.31.1
	I0917 09:10:52.377684       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:10:52.824178       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 09:10:52.824356       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 09:10:52.826684       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 09:10:52.828510       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 09:10:52.829505       1 instance.go:232] Using reconciler: lease
	W0917 09:11:12.810788       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0917 09:11:12.813364       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0917 09:11:12.831731       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0917 09:11:12.831919       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [13b7f8a93ad4] <==
	I0917 09:10:53.058887       1 serving.go:386] Generated self-signed cert in-memory
	I0917 09:10:53.469010       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 09:10:53.469133       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:10:53.478660       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 09:10:53.478827       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 09:10:53.478677       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 09:10:53.479256       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0917 09:11:13.838538       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-controller-manager [ca7fe8ccd4c5] <==
	I0917 09:12:17.473758       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="14.760263ms"
	I0917 09:12:17.473945       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="34.651µs"
	I0917 09:12:18.632033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="35.896µs"
	I0917 09:12:18.776005       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.253969ms"
	I0917 09:12:18.776119       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.789µs"
	I0917 09:12:18.785648       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="39.503µs"
	I0917 09:12:18.798097       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-f4rqd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-f4rqd\": the object has been modified; please apply your changes to the latest version and try again"
	I0917 09:12:18.798477       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fbe3dede-bdc6-453b-baec-6a20140ca1b1", APIVersion:"v1", ResourceVersion:"261", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-f4rqd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-f4rqd": the object has been modified; please apply your changes to the latest version and try again
	I0917 09:12:19.953163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:19.967434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:20.682851       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:23.128380       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:25.083746       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:39.721967       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-857000-m04"
	I0917 09:12:39.722197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:39.733466       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:40.010916       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m04"
	I0917 09:12:57.587381       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-f4rqd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-f4rqd\": the object has been modified; please apply your changes to the latest version and try again"
	I0917 09:12:57.588538       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fbe3dede-bdc6-453b-baec-6a20140ca1b1", APIVersion:"v1", ResourceVersion:"261", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-f4rqd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-f4rqd": the object has been modified; please apply your changes to the latest version and try again
	I0917 09:12:57.619018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="58.719907ms"
	E0917 09:12:57.619070       1 replica_set.go:560] "Unhandled Error" err="sync \"kube-system/coredns-7c65d6cfc9\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-7c65d6cfc9\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0917 09:12:57.620470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="97.988µs"
	I0917 09:12:57.624100       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-f4rqd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-f4rqd\": the object has been modified; please apply your changes to the latest version and try again"
	I0917 09:12:57.624538       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fbe3dede-bdc6-453b-baec-6a20140ca1b1", APIVersion:"v1", ResourceVersion:"261", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-f4rqd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-f4rqd": the object has been modified; please apply your changes to the latest version and try again
	I0917 09:12:57.625793       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.927µs"
	
	
	==> kube-proxy [0b03e5e48893] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:00:59.069869       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:00:59.079118       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 09:00:59.079199       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:00:59.109184       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:00:59.109227       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:00:59.109245       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:00:59.111661       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:00:59.111847       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:00:59.111876       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:59.112952       1 config.go:199] "Starting service config controller"
	I0917 09:00:59.112979       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:00:59.112995       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:00:59.112998       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:00:59.113603       1 config.go:328] "Starting node config controller"
	I0917 09:00:59.113673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:00:59.213587       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:00:59.213649       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:00:59.213808       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c37a677e3118] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:12:19.054558       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:12:19.080090       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 09:12:19.080297       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:12:19.208559       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:12:19.208589       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:12:19.208607       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:12:19.212603       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:12:19.213076       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:12:19.213105       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:12:19.216919       1 config.go:199] "Starting service config controller"
	I0917 09:12:19.217067       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:12:19.217988       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:12:19.218116       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:12:19.228165       1 config.go:328] "Starting node config controller"
	I0917 09:12:19.228196       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:12:19.319175       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:12:19.319361       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:12:19.328396       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [00ff29c21371] <==
	W0917 09:11:36.373943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 09:11:36.373983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.374259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 09:11:36.374300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.376668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 09:11:36.376725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.376996       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 09:11:36.377204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.377457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 09:11:36.377528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.378762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 09:11:36.378803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.381567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 09:11:36.381612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.381875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 09:11:36.382055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 09:11:36.382353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382449       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 09:11:36.382484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382718       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 09:11:36.382767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 09:11:36.383104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 09:11:36.446439       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d9fae1497b04] <==
	E0917 09:09:54.047035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:01.417081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:01.417178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:02.586956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:02.587049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:09.339944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:09.340160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:12.375946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:12.375997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:14.579545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:14.579979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:18.357149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:18.357192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:19.971293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:19.971663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:22.259174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:22.259229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:24.413900       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:24.413975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	I0917 09:10:24.953479       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 09:10:24.953762       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0917 09:10:24.953909       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	E0917 09:10:24.953957       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0917 09:10:24.955052       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 09:10:24.955061       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 09:12:17 ha-857000 kubelet[1572]: E0917 09:12:17.230909    1572 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-vip-ha-857000\" already exists" pod="kube-system/kube-vip-ha-857000"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.239224    1572 apiserver.go:52] "Watching apiserver"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.296247    1572 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363699    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eecd1421-3a2f-4e48-b2b2-abcbef7869e7-lib-modules\") pod \"kindnet-7pf7v\" (UID: \"eecd1421-3a2f-4e48-b2b2-abcbef7869e7\") " pod="kube-system/kindnet-7pf7v"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363849    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eecd1421-3a2f-4e48-b2b2-abcbef7869e7-xtables-lock\") pod \"kindnet-7pf7v\" (UID: \"eecd1421-3a2f-4e48-b2b2-abcbef7869e7\") " pod="kube-system/kindnet-7pf7v"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363896    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eecd1421-3a2f-4e48-b2b2-abcbef7869e7-cni-cfg\") pod \"kindnet-7pf7v\" (UID: \"eecd1421-3a2f-4e48-b2b2-abcbef7869e7\") " pod="kube-system/kindnet-7pf7v"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363942    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a396757-8954-48d2-b708-dcdfbab21dc7-xtables-lock\") pod \"kube-proxy-vskbj\" (UID: \"7a396757-8954-48d2-b708-dcdfbab21dc7\") " pod="kube-system/kube-proxy-vskbj"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363979    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a396757-8954-48d2-b708-dcdfbab21dc7-lib-modules\") pod \"kube-proxy-vskbj\" (UID: \"7a396757-8954-48d2-b708-dcdfbab21dc7\") " pod="kube-system/kube-proxy-vskbj"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.364021    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d81e7b55-a14e-4dc7-9193-ebe6914cdacf-tmp\") pod \"storage-provisioner\" (UID: \"d81e7b55-a14e-4dc7-9193-ebe6914cdacf\") " pod="kube-system/storage-provisioner"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.381710    1572 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 17 09:12:18 ha-857000 kubelet[1572]: I0917 09:12:18.732394    1572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1d198ffe0b277108e5042e64604ceb887f7f17cd5814893fbf789f1ce180f0"
	Sep 17 09:12:18 ha-857000 kubelet[1572]: I0917 09:12:18.754870    1572 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-857000" podUID="84b805d8-9a8f-4c6f-b18f-76c98ca4776c"
	Sep 17 09:12:18 ha-857000 kubelet[1572]: I0917 09:12:18.779039    1572 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-857000"
	Sep 17 09:12:19 ha-857000 kubelet[1572]: I0917 09:12:19.228668    1572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ca8e5543181b6f9996b6d7e435c3947" path="/var/lib/kubelet/pods/3ca8e5543181b6f9996b6d7e435c3947/volumes"
	Sep 17 09:12:19 ha-857000 kubelet[1572]: I0917 09:12:19.846405    1572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-857000" podStartSLOduration=1.846388448 podStartE2EDuration="1.846388448s" podCreationTimestamp="2024-09-17 09:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-17 09:12:19.829429782 +0000 UTC m=+94.772487592" watchObservedRunningTime="2024-09-17 09:12:19.846388448 +0000 UTC m=+94.789446258"
	Sep 17 09:12:45 ha-857000 kubelet[1572]: E0917 09:12:45.245854    1572 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 09:12:45 ha-857000 kubelet[1572]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 09:12:45 ha-857000 kubelet[1572]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 09:12:45 ha-857000 kubelet[1572]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 09:12:45 ha-857000 kubelet[1572]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 09:12:45 ha-857000 kubelet[1572]: I0917 09:12:45.363926    1572 scope.go:117] "RemoveContainer" containerID="fcb7038a6ac9ef515ab38df1dab73586ab93030767bab4f0d4d141f34bac886f"
	Sep 17 09:12:49 ha-857000 kubelet[1572]: I0917 09:12:49.092301    1572 scope.go:117] "RemoveContainer" containerID="611759af4bf7a8b48c2739f53afaeba3cb10af70a80bf85bfb78eebe6230c491"
	Sep 17 09:12:49 ha-857000 kubelet[1572]: I0917 09:12:49.092548    1572 scope.go:117] "RemoveContainer" containerID="67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7"
	Sep 17 09:12:49 ha-857000 kubelet[1572]: E0917 09:12:49.092633    1572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d81e7b55-a14e-4dc7-9193-ebe6914cdacf)\"" pod="kube-system/storage-provisioner" podUID="d81e7b55-a14e-4dc7-9193-ebe6914cdacf"
	Sep 17 09:13:00 ha-857000 kubelet[1572]: I0917 09:13:00.226410    1572 scope.go:117] "RemoveContainer" containerID="67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-857000 -n ha-857000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-857000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.60s)

TestMultiControlPlane/serial/AddSecondaryNode (84.8s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-857000 --control-plane -v=7 --alsologtostderr
E0917 02:13:59.157400    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-857000 --control-plane -v=7 --alsologtostderr: (1m19.956933362s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr
ha_test.go:616: status says not all three control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr": ha-857000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m04
type: Worker
host: Running
kubelet: Running

ha-857000-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:619: status says not all four hosts are running: args "out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr": ha-857000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m04
type: Worker
host: Running
kubelet: Running

ha-857000-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:622: status says not all four kubelets are running: args "out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr": ha-857000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m04
type: Worker
host: Running
kubelet: Running

ha-857000-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:625: status says not all three apiservers are running: args "out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr": ha-857000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-857000-m04
type: Worker
host: Running
kubelet: Running

ha-857000-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-857000 -n ha-857000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-857000 logs -n 25: (3.574493807s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m04 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp testdata/cp-test.txt                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3490570692/001/cp-test_ha-857000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000:/home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000 sudo cat                                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m02:/home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m02 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03:/home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m03 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-857000 node stop m02 -v=7                                                                                                 | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-857000 node start m02 -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:05 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000 -v=7                                                                                                       | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-857000 -v=7                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT | 17 Sep 24 02:06 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-857000 --wait=true -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:06 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	| node    | ha-857000 node delete m03 -v=7                                                                                               | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-857000 stop -v=7                                                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT | 17 Sep 24 02:10 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-857000 --wait=true                                                                                                     | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:10 PDT | 17 Sep 24 02:13 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-857000                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:13 PDT | 17 Sep 24 02:14 PDT |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 02:10:27
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 02:10:27.105477    4110 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:10:27.105665    4110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:27.105670    4110 out.go:358] Setting ErrFile to fd 2...
	I0917 02:10:27.105674    4110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:27.105845    4110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:10:27.107332    4110 out.go:352] Setting JSON to false
	I0917 02:10:27.130053    4110 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2397,"bootTime":1726561830,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:10:27.130205    4110 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:10:27.152188    4110 out.go:177] * [ha-857000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:10:27.194040    4110 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:10:27.194117    4110 notify.go:220] Checking for updates...
	I0917 02:10:27.238575    4110 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:27.259736    4110 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:10:27.280930    4110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:10:27.301762    4110 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:10:27.322633    4110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:10:27.344421    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:27.344920    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.344973    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:27.354413    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52088
	I0917 02:10:27.354771    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:27.355142    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:27.355153    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:27.355356    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:27.355460    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.355684    4110 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:10:27.355976    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.356005    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:27.364420    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52090
	I0917 02:10:27.364811    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:27.365167    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:27.365180    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:27.365391    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:27.365504    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.393706    4110 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 02:10:27.435894    4110 start.go:297] selected driver: hyperkit
	I0917 02:10:27.435922    4110 start.go:901] validating driver "hyperkit" against &{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:10:27.436195    4110 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:10:27.436329    4110 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:10:27.436542    4110 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:10:27.445831    4110 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:10:27.449537    4110 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.449556    4110 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:10:27.452252    4110 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:10:27.452291    4110 cni.go:84] Creating CNI manager for ""
	I0917 02:10:27.452327    4110 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:10:27.452403    4110 start.go:340] cluster config:
	{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:10:27.452523    4110 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:10:27.494874    4110 out.go:177] * Starting "ha-857000" primary control-plane node in "ha-857000" cluster
	I0917 02:10:27.515806    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:27.515897    4110 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:10:27.515918    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:10:27.516138    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:10:27.516158    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:10:27.516383    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:27.517269    4110 start.go:360] acquireMachinesLock for ha-857000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:10:27.517388    4110 start.go:364] duration metric: took 96.177µs to acquireMachinesLock for "ha-857000"
	I0917 02:10:27.517441    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:10:27.517460    4110 fix.go:54] fixHost starting: 
	I0917 02:10:27.517898    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.517930    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:27.526784    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52092
	I0917 02:10:27.527129    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:27.527462    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:27.527473    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:27.527739    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:27.527880    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.527995    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:10:27.528094    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:27.528210    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3964
	I0917 02:10:27.529100    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3964 missing from process table
	I0917 02:10:27.529122    4110 fix.go:112] recreateIfNeeded on ha-857000: state=Stopped err=<nil>
	I0917 02:10:27.529141    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	W0917 02:10:27.529225    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:10:27.570570    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000" ...
	I0917 02:10:27.591801    4110 main.go:141] libmachine: (ha-857000) Calling .Start
	I0917 02:10:27.592089    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:27.592131    4110 main.go:141] libmachine: (ha-857000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid
	I0917 02:10:27.592193    4110 main.go:141] libmachine: (ha-857000) DBG | Using UUID f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c
	I0917 02:10:27.699994    4110 main.go:141] libmachine: (ha-857000) DBG | Generated MAC c2:63:2b:63:80:76
	I0917 02:10:27.700019    4110 main.go:141] libmachine: (ha-857000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:10:27.700136    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b6e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:27.700165    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b6e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:27.700210    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:10:27.700256    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:10:27.700270    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:10:27.701709    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Pid is 4124
	I0917 02:10:27.702059    4110 main.go:141] libmachine: (ha-857000) DBG | Attempt 0
	I0917 02:10:27.702070    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:27.702132    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:10:27.703343    4110 main.go:141] libmachine: (ha-857000) DBG | Searching for c2:63:2b:63:80:76 in /var/db/dhcpd_leases ...
	I0917 02:10:27.703398    4110 main.go:141] libmachine: (ha-857000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:10:27.703416    4110 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66e94781}
	I0917 02:10:27.703422    4110 main.go:141] libmachine: (ha-857000) DBG | Found match: c2:63:2b:63:80:76
	I0917 02:10:27.703434    4110 main.go:141] libmachine: (ha-857000) DBG | IP: 192.169.0.5
	I0917 02:10:27.703500    4110 main.go:141] libmachine: (ha-857000) Calling .GetConfigRaw
	I0917 02:10:27.704135    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:27.704313    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:27.704745    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:10:27.704755    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.704862    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:27.704967    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:27.705062    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:27.705172    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:27.705289    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:27.705426    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:27.705645    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:27.705655    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:10:27.709824    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:10:27.761328    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:10:27.762023    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:27.762037    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:27.762058    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:27.762068    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:28.142704    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:10:28.142720    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:10:28.257454    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:28.257477    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:28.257500    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:28.257510    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:28.258332    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:10:28.258356    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:10:33.845455    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:10:33.845506    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:10:33.845516    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:10:33.869458    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:10:38.774269    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:10:38.774287    4110 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:10:38.774460    4110 buildroot.go:166] provisioning hostname "ha-857000"
	I0917 02:10:38.774470    4110 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:10:38.774556    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.774689    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:38.774787    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.774865    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.774959    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:38.775097    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:38.775254    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:38.775262    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000 && echo "ha-857000" | sudo tee /etc/hostname
	I0917 02:10:38.842954    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000
	
	I0917 02:10:38.842972    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.843114    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:38.843224    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.843309    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.843398    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:38.843557    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:38.843701    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:38.843712    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:10:38.908790    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:10:38.908811    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:10:38.908824    4110 buildroot.go:174] setting up certificates
	I0917 02:10:38.908830    4110 provision.go:84] configureAuth start
	I0917 02:10:38.908845    4110 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:10:38.908979    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:38.909073    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.909177    4110 provision.go:143] copyHostCerts
	I0917 02:10:38.909208    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:10:38.909278    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:10:38.909287    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:10:38.909606    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:10:38.909812    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:10:38.909853    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:10:38.909857    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:10:38.909935    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:10:38.910085    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:10:38.910127    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:10:38.910132    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:10:38.910214    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:10:38.910362    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000 san=[127.0.0.1 192.169.0.5 ha-857000 localhost minikube]
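configureAuth regenerates the Docker server certificate with the SANs listed above (127.0.0.1, 192.169.0.5, ha-857000, localhost, minikube), signed by the minikube CA. Here is a minimal sketch of that kind of CA-signed certificate issuance with Go's crypto/x509; the in-memory CA, key sizes, and validity window are illustrative assumptions, and error handling is elided for brevity.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-in CA; minikube would load ca.pem/ca-key.pem from disk instead.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000"}},
    		DNSNames:     []string{"ha-857000", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }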
	I0917 02:10:38.962566    4110 provision.go:177] copyRemoteCerts
	I0917 02:10:38.962618    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:10:38.962632    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.962737    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:38.962836    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.962932    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:38.963020    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:38.998776    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:10:38.998851    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:10:39.018683    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:10:39.018741    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 02:10:39.038754    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:10:39.038814    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:10:39.058064    4110 provision.go:87] duration metric: took 149.217348ms to configureAuth
	I0917 02:10:39.058076    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:10:39.058257    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:39.058270    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:39.058416    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:39.058513    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:39.058598    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.058689    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.058780    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:39.058915    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:39.059035    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:39.059042    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:10:39.117847    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:10:39.117859    4110 buildroot.go:70] root file system type: tmpfs
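A tmpfs root tells the provisioner it is on a live-boot buildroot guest, where changes such as the Docker unit below presumably do not survive a reboot on their own. The probe itself is just the one-liner above; a hedged local Go equivalent (minikube runs it over SSH, and the --output flag assumes a Linux guest's df):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same probe as the SSH command above, run locally for illustration.
    	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("root fstype:", strings.TrimSpace(string(out)))
    }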
	I0917 02:10:39.117937    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:10:39.117952    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:39.118078    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:39.118171    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.118258    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.118338    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:39.118469    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:39.118616    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:39.118663    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:10:39.186097    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:10:39.186120    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:39.186247    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:39.186347    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.186426    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.186527    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:39.186659    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:39.186806    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:39.186817    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:10:40.814202    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:10:40.814217    4110 machine.go:96] duration metric: took 13.109237782s to provisionDockerMachine
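The diff || { mv ...; systemctl ... } idiom above makes the unit update idempotent: the rendered file only replaces the installed one, and docker is only re-enabled and restarted, when the contents actually differ (here diff failed because no unit existed yet, so the new one was installed). A small Go sketch of the same write-compare-replace pattern, assuming local file access rather than SSH:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // updateIfChanged installs newContents at path only when it differs from
    // what is already there; the caller reloads the service only on change.
    func updateIfChanged(path string, newContents []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, newContents) {
    		return false, nil // identical: nothing to do
    	}
    	if err != nil && !os.IsNotExist(err) {
    		return false, err
    	}
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, newContents, 0o644); err != nil {
    		return false, err
    	}
    	return true, os.Rename(tmp, path) // atomic swap, like the sudo mv above
    }

    func main() {
    	changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
    	fmt.Println(changed, err)
    }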
	I0917 02:10:40.814229    4110 start.go:293] postStartSetup for "ha-857000" (driver="hyperkit")
	I0917 02:10:40.814236    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:10:40.814246    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:40.814438    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:10:40.814456    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:40.814571    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:40.814667    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.814762    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:40.814848    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:40.854204    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:10:40.857656    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:10:40.857668    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:10:40.857773    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:10:40.857955    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:10:40.857962    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:10:40.858166    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:10:40.867201    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:10:40.895727    4110 start.go:296] duration metric: took 81.487995ms for postStartSetup
	I0917 02:10:40.895754    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:40.895937    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:10:40.895964    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:40.896062    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:40.896140    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.896211    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:40.896292    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:40.931812    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:10:40.931872    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:10:40.965671    4110 fix.go:56] duration metric: took 13.447980679s for fixHost
	I0917 02:10:40.965693    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:40.965831    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:40.965924    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.966013    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.966122    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:40.966261    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:40.966403    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:40.966410    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:10:41.023835    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564240.935930388
	
	I0917 02:10:41.023847    4110 fix.go:216] guest clock: 1726564240.935930388
	I0917 02:10:41.023853    4110 fix.go:229] Guest: 2024-09-17 02:10:40.935930388 -0700 PDT Remote: 2024-09-17 02:10:40.965683 -0700 PDT m=+13.896006994 (delta=-29.752612ms)
	I0917 02:10:41.023870    4110 fix.go:200] guest clock delta is within tolerance: -29.752612ms
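The clock check compares the guest's date +%s.%N output against the host's wall clock and accepts the skew if it stays within tolerance; here the delta was about -30ms. A sketch of the delta computation, parsing seconds and nanoseconds separately to avoid float rounding (the one-second tolerance below is an assumption, not minikube's exact threshold):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1726564240.935930388" (date +%s.%N) into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		frac := (parts[1] + "000000000")[:9] // pad/truncate to 9 digits
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := parseGuestClock("1726564240.935930388")
    	delta := guest.Sub(time.Now())
    	fmt.Printf("guest clock delta: %v (ok: %v)\n", delta, delta.Abs() < time.Second)
    }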
	I0917 02:10:41.023873    4110 start.go:83] releasing machines lock for "ha-857000", held for 13.506240986s
	I0917 02:10:41.023893    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024017    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:41.024124    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024416    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024496    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024577    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:10:41.024607    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:41.024622    4110 ssh_runner.go:195] Run: cat /version.json
	I0917 02:10:41.024633    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:41.024692    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:41.024731    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:41.024799    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:41.024812    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:41.024882    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:41.024908    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:41.025002    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:41.025031    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:41.057444    4110 ssh_runner.go:195] Run: systemctl --version
	I0917 02:10:41.119261    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 02:10:41.123760    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:10:41.123809    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:10:41.136297    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:10:41.136307    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:10:41.136412    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:10:41.153182    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:10:41.162387    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:10:41.171363    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:10:41.171411    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:10:41.180339    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:10:41.189205    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:10:41.198331    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:10:41.207214    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:10:41.216288    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:10:41.225185    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:10:41.234170    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:10:41.243192    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:10:41.251363    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:10:41.259648    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:41.359254    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
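Each sed above flips one key in /etc/containerd/config.toml (sandbox image, SystemdCgroup, runtime type, CNI conf dir) before containerd is restarted. The same edit can be expressed as an anchored regexp rewrite; a Go sketch for the SystemdCgroup line, assuming the file is small enough to rewrite in memory:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }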
	I0917 02:10:41.378053    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:10:41.378144    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:10:41.391608    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:10:41.406431    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:10:41.426598    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:10:41.437654    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:10:41.448507    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:10:41.470118    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:10:41.481632    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:10:41.496609    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:10:41.499690    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:10:41.507723    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:10:41.520894    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:10:41.633690    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:10:41.735063    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:10:41.735129    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:10:41.749181    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:41.842846    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:10:44.137188    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.294283491s)
	I0917 02:10:44.137256    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:10:44.147554    4110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:10:44.160480    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:10:44.170998    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:10:44.262329    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:10:44.355414    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:44.456404    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:10:44.470268    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:10:44.481488    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:44.585298    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:10:44.651024    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:10:44.651127    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:10:44.655468    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:10:44.655523    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:10:44.660816    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:10:44.685805    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:10:44.685900    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:10:44.701620    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:10:44.762577    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:10:44.762643    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:44.763055    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:10:44.767764    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
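The bash pipeline above pins exactly one host.minikube.internal entry: it drops any line already ending in the name, appends the fresh mapping, and copies the result back via sudo (the same pattern reappears later for control-plane.minikube.internal). A Go rendering of the same filter-and-append, with hypothetical names and local file access:

    package main

    import (
    	"os"
    	"strings"
    )

    // pinHost rewrites an /etc/hosts-style file so it contains exactly one
    // "<ip>\t<name>" entry for name, like the grep -v / echo pipeline above.
    func pinHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	keep := []string{}
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			keep = append(keep, line)
    		}
    	}
    	keep = append(keep, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := pinHost("/tmp/hosts", "192.169.0.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }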
	I0917 02:10:44.778676    4110 kubeadm.go:883] updating cluster {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 02:10:44.778770    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:44.778845    4110 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:10:44.792490    4110 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:10:44.792502    4110 docker.go:615] Images already preloaded, skipping extraction
	I0917 02:10:44.792587    4110 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:10:44.806122    4110 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:10:44.806141    4110 cache_images.go:84] Images are preloaded, skipping loading
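Extraction is skipped because every image kubeadm will need already shows up in docker images. Conceptually this is just a set difference between the required image list and what the runtime reports; a minimal sketch, where requiredImages is a hypothetical stand-in for minikube's per-version list:

    package main

    import "fmt"

    // missingImages returns the required images that are absent from present.
    func missingImages(required, present []string) []string {
    	have := make(map[string]bool, len(present))
    	for _, img := range present {
    		have[img] = true
    	}
    	var missing []string
    	for _, img := range required {
    		if !have[img] {
    			missing = append(missing, img)
    		}
    	}
    	return missing
    }

    func main() {
    	requiredImages := []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/etcd:3.5.15-0"}
    	present := []string{"registry.k8s.io/pause:3.10"}
    	fmt.Println(missingImages(requiredImages, present)) // [registry.k8s.io/etcd:3.5.15-0]
    }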
	I0917 02:10:44.806152    4110 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 02:10:44.806226    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:10:44.806308    4110 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:10:44.838425    4110 cni.go:84] Creating CNI manager for ""
	I0917 02:10:44.838438    4110 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:10:44.838451    4110 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:10:44.838467    4110 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-857000 NodeName:ha-857000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:10:44.838548    4110 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-857000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
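The manifest above is rendered from the kubeadm options struct logged at kubeadm.go:181 rather than written by hand. A toy text/template rendering of just the InitConfiguration stanza shows the shape of that generation step; the template string and opts type here are illustrative, not minikube's actual ones:

    package main

    import (
    	"os"
    	"text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    type opts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	CRISocket        string
    	NodeName         string
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	t.Execute(os.Stdout, opts{"192.169.0.5", 8443, "unix:///var/run/cri-dockerd.sock", "ha-857000"})
    }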
	
	I0917 02:10:44.838565    4110 kube-vip.go:115] generating kube-vip config ...
	I0917 02:10:44.838624    4110 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:10:44.852006    4110 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:10:44.852072    4110 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 02:10:44.852126    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:10:44.861875    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:10:44.861926    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 02:10:44.870065    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 02:10:44.883323    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:10:44.896671    4110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 02:10:44.910190    4110 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:10:44.923776    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:10:44.926683    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:10:44.936751    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:45.031050    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:10:45.045803    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.5
	I0917 02:10:45.045815    4110 certs.go:194] generating shared ca certs ...
	I0917 02:10:45.045826    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.046013    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:10:45.046090    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:10:45.046101    4110 certs.go:256] generating profile certs ...
	I0917 02:10:45.046208    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:10:45.046290    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea
	I0917 02:10:45.046357    4110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:10:45.046364    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:10:45.046385    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:10:45.046406    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:10:45.046424    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:10:45.046442    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:10:45.046474    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:10:45.046503    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:10:45.046520    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:10:45.046624    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:10:45.046679    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:10:45.046688    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:10:45.046749    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:10:45.046790    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:10:45.046829    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:10:45.046908    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:10:45.046945    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.046966    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.046984    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.047483    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:10:45.080356    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:10:45.112920    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:10:45.138450    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:10:45.175252    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:10:45.218044    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:10:45.251977    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:10:45.309085    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:10:45.353596    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:10:45.384476    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:10:45.404778    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:10:45.423525    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 02:10:45.437207    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:10:45.441704    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:10:45.450346    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.453899    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.453945    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.458361    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:10:45.466854    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:10:45.475379    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.478924    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.478963    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.483279    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:10:45.491638    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:10:45.500375    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.504070    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.504128    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.508583    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
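The openssl x509 -hash calls explain the symlink names above: /etc/ssl/certs/<subject-hash>.0 is how OpenSSL-based clients locate a trusted CA, so each PEM gets a hash-named link (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the test certs). A sketch that shells out to openssl for the hash, since Go's standard library does not expose OpenSSL's subject-hash algorithm:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash creates certsDir/<hash>.0 -> pemPath, matching the
    // test -L || ln -fs commands in the log above.
    func linkBySubjectHash(certsDir, pemPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return "", err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	if _, err := os.Lstat(link); err == nil {
    		return link, nil // already linked
    	}
    	return link, os.Symlink(pemPath, link)
    }

    func main() {
    	link, err := linkBySubjectHash("/etc/ssl/certs", "/usr/share/ca-certificates/minikubeCA.pem")
    	fmt.Println(link, err)
    }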
	I0917 02:10:45.516977    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:10:45.520582    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:10:45.524889    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:10:45.529282    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:10:45.533668    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:10:45.538022    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:10:45.542262    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
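Each openssl x509 -checkend 86400 call asks whether the certificate expires within the next day, which is what decides between reusing and regenerating it. A pure-Go equivalent using crypto/x509 (path handling is simplified; minikube runs the openssl binary over SSH instead):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file at
    // path reaches NotAfter within d (the -checkend 86400 check uses d = 24h).
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }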
	I0917 02:10:45.546447    4110 kubeadm.go:392] StartCluster: {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:10:45.546579    4110 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:10:45.558935    4110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:10:45.566714    4110 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 02:10:45.566724    4110 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:10:45.566760    4110 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:10:45.574257    4110 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:10:45.574553    4110 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-857000" does not appear in /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:45.574638    4110 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1025/kubeconfig needs updating (will repair): [kubeconfig missing "ha-857000" cluster setting kubeconfig missing "ha-857000" context setting]
	I0917 02:10:45.574818    4110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.575437    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:45.575640    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
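
kubeconfig.go noticed the "ha-857000" cluster and context entries were missing and repairs the file under a write lock. A sketch of the repair half using client-go's clientcmd package; the field values are copied from the log, and the minimal Cluster/Context shapes are an assumption (minikube's real entries carry more fields):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig inserts a cluster and context for the profile if absent.
func repairKubeconfig(path, name, server, caFile string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caFile}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = repairKubeconfig(
		"/Users/jenkins/minikube-integration/19648-1025/kubeconfig",
		"ha-857000", "https://192.169.0.5:8443",
		"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt",
	)
}
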
	I0917 02:10:45.575954    4110 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 02:10:45.576155    4110 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:10:45.583535    4110 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 02:10:45.583548    4110 kubeadm.go:597] duration metric: took 16.820219ms to restartPrimaryControlPlane
	I0917 02:10:45.583553    4110 kubeadm.go:394] duration metric: took 37.114772ms to StartCluster
	I0917 02:10:45.583562    4110 settings.go:142] acquiring lock: {Name:mk5de70c670a23cf49fdbd19ddb5c1d2a9de9a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.583637    4110 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:45.584029    4110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.584244    4110 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:10:45.584257    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:10:45.584290    4110 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:10:45.584399    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:45.629290    4110 out.go:177] * Enabled addons: 
	I0917 02:10:45.650483    4110 addons.go:510] duration metric: took 66.114939ms for enable addons: enabled=[]
	I0917 02:10:45.650526    4110 start.go:246] waiting for cluster config update ...
	I0917 02:10:45.650541    4110 start.go:255] writing updated cluster config ...
	I0917 02:10:45.672110    4110 out.go:201] 
	I0917 02:10:45.693671    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:45.693812    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:45.716376    4110 out.go:177] * Starting "ha-857000-m02" control-plane node in "ha-857000" cluster
	I0917 02:10:45.758138    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:45.758205    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:10:45.758422    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:10:45.758440    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:10:45.758566    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:45.759523    4110 start.go:360] acquireMachinesLock for ha-857000-m02: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:10:45.759643    4110 start.go:364] duration metric: took 94.526µs to acquireMachinesLock for "ha-857000-m02"
	I0917 02:10:45.759684    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:10:45.759694    4110 fix.go:54] fixHost starting: m02
	I0917 02:10:45.760135    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:45.760170    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:45.769422    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52114
	I0917 02:10:45.769778    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:45.770120    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:45.770130    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:45.770332    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:45.770446    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:10:45.770540    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetState
	I0917 02:10:45.770620    4110 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:45.770696    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3976
	I0917 02:10:45.771617    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3976 missing from process table
	I0917 02:10:45.771641    4110 fix.go:112] recreateIfNeeded on ha-857000-m02: state=Stopped err=<nil>
	I0917 02:10:45.771648    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	W0917 02:10:45.771734    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:10:45.793214    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m02" ...
	I0917 02:10:45.835194    4110 main.go:141] libmachine: (ha-857000-m02) Calling .Start
	I0917 02:10:45.835422    4110 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:45.835478    4110 main.go:141] libmachine: (ha-857000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid
	I0917 02:10:45.836481    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3976 missing from process table
	I0917 02:10:45.836493    4110 main.go:141] libmachine: (ha-857000-m02) DBG | pid 3976 is in state "Stopped"
	I0917 02:10:45.836506    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid...
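
An unclean shutdown leaves hyperkit.pid behind, so before reusing the state dir the driver checks whether the recorded pid is still alive and removes the stale file. A minimal Unix sketch of that liveness probe (signal 0 checks existence without sending anything; the path is a placeholder):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// stalePidFile reports whether pidPath names a process that is no longer running.
func stalePidFile(pidPath string) (bool, error) {
	data, err := os.ReadFile(pidPath)
	if err != nil {
		return false, err // no pid file at all
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return true, nil // unparseable file is treated as stale
	}
	// Signal 0 performs error checking only: ESRCH means the pid is gone.
	return syscall.Kill(pid, 0) == syscall.ESRCH, nil
}

func main() {
	stale, err := stalePidFile("/path/to/hyperkit.pid") // placeholder path
	fmt.Println(stale, err)
}
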
	I0917 02:10:45.836730    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Using UUID 1940a045-0b1b-4b28-842d-d4858a62cbd3
	I0917 02:10:45.862461    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Generated MAC 9a:95:4e:4b:65:fe
	I0917 02:10:45.862487    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:10:45.862599    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:45.862645    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:45.862683    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1940a045-0b1b-4b28-842d-d4858a62cbd3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:10:45.862720    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1940a045-0b1b-4b28-842d-d4858a62cbd3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:10:45.862741    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:10:45.864138    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Pid is 4131
	I0917 02:10:45.864563    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Attempt 0
	I0917 02:10:45.864573    4110 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:45.864635    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 4131
	I0917 02:10:45.866426    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Searching for 9a:95:4e:4b:65:fe in /var/db/dhcpd_leases ...
	I0917 02:10:45.866511    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:10:45.866527    4110 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:10:45.866546    4110 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea9817}
	I0917 02:10:45.866556    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Found match: 9a:95:4e:4b:65:fe
	I0917 02:10:45.866585    4110 main.go:141] libmachine: (ha-857000-m02) DBG | IP: 192.169.0.6
	I0917 02:10:45.866617    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetConfigRaw
	I0917 02:10:45.867379    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:10:45.867624    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:45.868172    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:10:45.868192    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:10:45.868319    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:10:45.868433    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:10:45.868540    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:10:45.868629    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:10:45.868743    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:10:45.868892    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:45.869038    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:10:45.869047    4110 main.go:141] libmachine: About to run SSH command:
	hostname
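
The "native" SSH client above is golang.org/x/crypto/ssh rather than the system ssh binary. A minimal sketch of running `hostname` the same way, with key auth; skipping host-key verification is an assumption that only fits throwaway test VMs:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runHostname(addr, user, keyPath string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	return string(out), err
}

func main() {
	out, err := runHostname("192.169.0.6:22", "docker", "/path/to/id_rsa") // placeholder key path
	fmt.Println(out, err)
}
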
	I0917 02:10:45.871979    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:10:45.880237    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:10:45.881261    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:45.881280    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:45.881317    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:45.881331    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:46.263104    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:10:46.263119    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:10:46.377844    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:46.377864    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:46.377874    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:46.377890    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:46.378727    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:10:46.378736    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:10:51.977750    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:51 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:10:51.977833    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:51 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:10:51.977841    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:51 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:10:52.002295    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:52 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:11:20.931384    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:11:20.931398    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:11:20.931549    4110 buildroot.go:166] provisioning hostname "ha-857000-m02"
	I0917 02:11:20.931560    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:11:20.931664    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:20.931762    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:20.931855    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.931937    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.932033    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:20.932169    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:20.932351    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:20.932359    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m02 && echo "ha-857000-m02" | sudo tee /etc/hostname
	I0917 02:11:20.993183    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m02
	
	I0917 02:11:20.993198    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:20.993326    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:20.993440    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.993526    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.993618    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:20.993763    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:20.993914    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:20.993925    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:11:21.050925    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:11:21.050951    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:11:21.050960    4110 buildroot.go:174] setting up certificates
	I0917 02:11:21.050966    4110 provision.go:84] configureAuth start
	I0917 02:11:21.050972    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:11:21.051109    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:11:21.051192    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.051304    4110 provision.go:143] copyHostCerts
	I0917 02:11:21.051330    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:21.051388    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:11:21.051394    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:21.051551    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:11:21.051732    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:21.051778    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:11:21.051784    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:21.051862    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:11:21.051999    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:21.052037    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:11:21.052041    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:21.052127    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:11:21.052261    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m02 san=[127.0.0.1 192.169.0.6 ha-857000-m02 localhost minikube]
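
provision.go:117 issues a per-machine server certificate signed by the cluster CA, with SANs [127.0.0.1 192.169.0.6 ha-857000-m02 localhost minikube] covering every name the Docker TLS endpoint might be dialed by. A compressed crypto/x509 sketch; to stay self-contained it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and error handling is elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the SAN set from the log line above
		DNSNames:    []string{"ha-857000-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(len(der), err)
}
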
	I0917 02:11:21.131473    4110 provision.go:177] copyRemoteCerts
	I0917 02:11:21.131534    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:11:21.131551    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.131683    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.131772    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.131866    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.131988    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:21.165457    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:11:21.165530    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:11:21.185353    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:11:21.185424    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:11:21.204885    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:11:21.204944    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:11:21.224555    4110 provision.go:87] duration metric: took 173.578725ms to configureAuth
	I0917 02:11:21.224572    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:11:21.224752    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:21.224765    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:21.224898    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.224985    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.225071    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.225151    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.225226    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.225334    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:21.225453    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:21.225471    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:11:21.276594    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:11:21.276610    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:11:21.276682    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:11:21.276692    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.276824    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.276911    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.276982    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.277068    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.277206    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:21.277343    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:21.277390    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:11:21.338440    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:11:21.338457    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.338602    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.338693    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.338786    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.338878    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.339018    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:21.339165    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:21.339180    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:11:23.000541    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:11:23.000557    4110 machine.go:96] duration metric: took 37.131734761s to provisionDockerMachine
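
Note the update pattern that just ran: the rendered unit goes to docker.service.new, `diff -u` decides whether anything changed, and only then is the file swapped in and docker reloaded, enabled, and restarted; on this first boot the diff fails because no unit exists yet, so the swap branch runs and systemd reports the new symlink. A sketch of the same guard in Go, shelling out the way ssh_runner does (assumes root on the guest):

package main

import (
	"fmt"
	"os/exec"
)

// swapUnitIfChanged mirrors:
//   diff -u old new || { mv new old; systemctl daemon-reload && systemctl restart docker; }
func swapUnitIfChanged(oldPath, newPath string) error {
	// diff exits non-zero when the files differ or oldPath is missing
	if err := exec.Command("diff", "-u", oldPath, newPath).Run(); err == nil {
		return nil // unit unchanged: skip the disruptive restart
	}
	if err := exec.Command("mv", newPath, oldPath).Run(); err != nil {
		return err
	}
	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
		return err
	}
	return exec.Command("systemctl", "restart", "docker").Run()
}

func main() {
	err := swapUnitIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new",
	)
	fmt.Println(err)
}
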
	I0917 02:11:23.000565    4110 start.go:293] postStartSetup for "ha-857000-m02" (driver="hyperkit")
	I0917 02:11:23.000572    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:11:23.000581    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.000771    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:11:23.000784    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.000877    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.000970    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.001060    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.001151    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:23.034070    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:11:23.037044    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:11:23.037054    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:11:23.037149    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:11:23.037326    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:11:23.037333    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:11:23.037542    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:11:23.045540    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:11:23.064134    4110 start.go:296] duration metric: took 63.560241ms for postStartSetup
	I0917 02:11:23.064153    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.064355    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:11:23.064367    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.064443    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.064537    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.064625    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.064699    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:23.096648    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:11:23.096719    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:11:23.150750    4110 fix.go:56] duration metric: took 37.39040777s for fixHost
	I0917 02:11:23.150781    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.150933    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.151043    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.151139    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.151225    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.151344    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:23.151480    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:23.151487    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:11:23.205108    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564282.931256187
	
	I0917 02:11:23.205121    4110 fix.go:216] guest clock: 1726564282.931256187
	I0917 02:11:23.205126    4110 fix.go:229] Guest: 2024-09-17 02:11:22.931256187 -0700 PDT Remote: 2024-09-17 02:11:23.150765 -0700 PDT m=+56.080359699 (delta=-219.508813ms)
	I0917 02:11:23.205134    4110 fix.go:200] guest clock delta is within tolerance: -219.508813ms
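
fix.go compares the guest's `date +%s.%N` against the host clock and only forces a resync when the skew leaves tolerance; the -219ms delta here passes. A small sketch of that check; the 2s tolerance value is an assumption for illustration:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock skew is acceptable.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		return delta, -delta <= tol
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(-219508813 * time.Nanosecond)       // the delta from the log
	d, ok := withinTolerance(guest, host, 2*time.Second) // tolerance assumed
	fmt.Printf("delta=%v within=%v\n", d, ok)
}
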
	I0917 02:11:23.205138    4110 start.go:83] releasing machines lock for "ha-857000-m02", held for 37.444836088s
	I0917 02:11:23.205153    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.205283    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:11:23.226836    4110 out.go:177] * Found network options:
	I0917 02:11:23.247780    4110 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 02:11:23.268466    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:23.268508    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.269341    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.269597    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.269778    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 02:11:23.269794    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:23.269828    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.269896    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:11:23.269915    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.270129    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.270153    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.270351    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.270407    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.270526    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.270571    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.270741    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:23.270760    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	W0917 02:11:23.355936    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:11:23.356046    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:11:23.371785    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:11:23.371805    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:23.371897    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:23.389343    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:11:23.397507    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:11:23.405706    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:11:23.405760    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:11:23.413954    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:23.422064    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:11:23.430077    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:23.438247    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:11:23.446615    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:11:23.455025    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:11:23.463904    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:11:23.472877    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:11:23.480886    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:11:23.488979    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:23.586431    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
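
The sed runs above switch containerd to the cgroupfs driver, the key edit being `SystemdCgroup = false` in /etc/containerd/config.toml, after which the daemon is reloaded and restarted. The same single edit as a Go regexp rewrite, preserving indentation like the sed expression does:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupfs rewrites any `SystemdCgroup = ...` line to false,
// keeping its leading whitespace, like the sed command in the log.
func setCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	fmt.Println(setCgroupfs("/etc/containerd/config.toml"))
}
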
	I0917 02:11:23.605512    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:23.605590    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:11:23.619031    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:23.632481    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:11:23.650301    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:23.661034    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:23.671499    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:11:23.693809    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:23.704324    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:23.719425    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:11:23.722279    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:11:23.729409    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:11:23.743121    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:11:23.848749    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:11:23.947630    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:11:23.947661    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:11:23.965207    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:24.060164    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:11:26.333778    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.273556023s)
	I0917 02:11:26.333847    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:11:26.345198    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:26.355965    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:11:26.461793    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:11:26.556361    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:26.674366    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:11:26.687753    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:26.697698    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:26.797118    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:11:26.861306    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:11:26.861392    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
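
start.go:542 gives cri-dockerd 60 seconds to expose its CRI socket before proceeding. A minimal polling sketch; the 500ms interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the deadline passes.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second, 500*time.Millisecond)
	fmt.Println(err)
}
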
	I0917 02:11:26.865857    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:11:26.865915    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:11:26.869732    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:11:26.894886    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:11:26.894999    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:26.911893    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:26.950833    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:11:26.972458    4110 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 02:11:26.993284    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:11:26.993711    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:11:26.998329    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:11:27.008512    4110 mustload.go:65] Loading cluster: ha-857000
	I0917 02:11:27.008684    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:27.008920    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:11:27.008943    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:11:27.017607    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52136
	I0917 02:11:27.017941    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:11:27.018292    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:11:27.018310    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:11:27.018503    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:11:27.018620    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:11:27.018699    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:27.018771    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:11:27.019715    4110 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:11:27.019989    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:11:27.020015    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:11:27.028562    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52138
	I0917 02:11:27.028902    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:11:27.029241    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:11:27.029257    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:11:27.029461    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:11:27.029566    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:11:27.029665    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.6
	I0917 02:11:27.029672    4110 certs.go:194] generating shared ca certs ...
	I0917 02:11:27.029680    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:11:27.029857    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:11:27.029930    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:11:27.029938    4110 certs.go:256] generating profile certs ...
	I0917 02:11:27.030058    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:11:27.030140    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.d3e75930
	I0917 02:11:27.030214    4110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:11:27.030221    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:11:27.030242    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:11:27.030266    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:11:27.030285    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:11:27.030303    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:11:27.030337    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:11:27.030366    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:11:27.030389    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:11:27.030486    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:11:27.030540    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:11:27.030549    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:11:27.030587    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:11:27.030621    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:11:27.030651    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:11:27.030716    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:11:27.030753    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.030774    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.030792    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
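
The three "skipping valid signed profile cert regeneration" lines come from minikube reusing certs on disk that are still valid rather than minting new ones (the `-checkend 86400` runs further down apply the same 24-hour test on the remote side). A minimal sketch of that decision, assuming a hypothetical needsRegen helper; the real logic lives in minikube's certs.go:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// needsRegen is a hypothetical helper: regenerate only when the cert on
// disk is missing, unparsable, or expiring within the given window.
func needsRegen(path string, window time.Duration) bool {
	data, err := os.ReadFile(path)
	if err != nil {
		return true // no cert on disk: generate it
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return true // not PEM: regenerate
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return true // unparsable: regenerate
	}
	// Regenerate if the cert expires inside the window (cf. -checkend 86400).
	return time.Now().Add(window).After(cert.NotAfter)
}

func main() {
	fmt.Println(needsRegen("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour))
}
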
	I0917 02:11:27.030816    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:11:27.030911    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:11:27.031000    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:11:27.031078    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:11:27.031162    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
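
The sshutil.go line builds an SSH client from the IP, port, username and key path shown. A rough standalone equivalent using golang.org/x/crypto/ssh, with the host and key path taken from the log (error handling kept blunt; minikube's own client setup differs):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected:", string(client.ServerVersion()))
}
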
	I0917 02:11:27.058778    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 02:11:27.062313    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 02:11:27.070939    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 02:11:27.074280    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 02:11:27.083003    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 02:11:27.086057    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 02:11:27.094554    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 02:11:27.097659    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 02:11:27.106657    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 02:11:27.109894    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 02:11:27.118370    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 02:11:27.121478    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
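
Each `stat -c %s` / "scp ... --> memory" pair above sizes a remote cert and then pulls its contents into memory so they can be mirrored onto the new control-plane node. A sketch of that pattern over a connected *ssh.Client (hypothetical readRemote helper, not minikube's ssh_runner):

package sshcopy

import (
	"fmt"
	"strings"

	"golang.org/x/crypto/ssh"
)

// readRemote mirrors the stat-then-read pattern in the log. client is a
// connected *ssh.Client such as the one dialed in the previous sketch.
func readRemote(client *ssh.Client, path string) ([]byte, error) {
	// An ssh.Session runs exactly one command, so the size check and the
	// content read each get their own session, like the paired Run lines.
	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	size, err := sess.Output("stat -c %s " + path)
	sess.Close()
	if err != nil {
		return nil, err
	}
	fmt.Printf("%s is %s bytes\n", path, strings.TrimSpace(string(size)))

	sess, err = client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	return sess.Output("cat " + path) // the "scp ... --> memory" step
}
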
	I0917 02:11:27.130386    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:11:27.150256    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:11:27.169526    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:11:27.188769    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:11:27.207966    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:11:27.227067    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:11:27.246289    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:11:27.265271    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:11:27.284669    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:11:27.303761    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:11:27.323113    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:11:27.342331    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 02:11:27.355765    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 02:11:27.369277    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 02:11:27.382837    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 02:11:27.396474    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 02:11:27.410313    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 02:11:27.423731    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 02:11:27.437366    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:11:27.441447    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:11:27.450619    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.453941    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.453997    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.458171    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:11:27.467199    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:11:27.476144    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.479431    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.479473    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.483603    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:11:27.492580    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:11:27.501517    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.504871    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.504915    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.509027    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
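
The `openssl x509 -hash` runs feed the <hash>.0 symlinks created alongside them (51391683.0, 3ec20f2e.0, b5213941.0): OpenSSL resolves CAs in /etc/ssl/certs by subject-hash filename. A sketch that shells out the same way; in the log this all happens remotely via ssh_runner, and the paths are the ones shown above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Ask openssl for the subject hash, exactly as in the log.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// `ln -fs` equivalent: drop any stale link, then create a fresh one.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
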
	I0917 02:11:27.517892    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:11:27.521155    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:11:27.525378    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:11:27.529633    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:11:27.533810    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:11:27.538003    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:11:27.542137    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 02:11:27.546288    4110 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I0917 02:11:27.546336    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:11:27.546350    4110 kube-vip.go:115] generating kube-vip config ...
	I0917 02:11:27.546384    4110 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:11:27.558948    4110 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:11:27.558990    4110 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
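
The YAML above is written out as a static pod manifest, so the kubelet on the new node runs kube-vip without the API server being involved. A trimmed-down sketch of generating such a manifest with text/template; the full config is the one printed above, and the VIP, port and image values are taken from the log:

package main

import (
	"os"
	"text/template"
)

// A heavily abbreviated stand-in for the kube-vip config printed above.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// kubelet watches /etc/kubernetes/manifests and starts anything placed here.
	f, err := os.Create("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := t.Execute(f, map[string]string{"VIP": "192.169.0.254", "Port": "8443"}); err != nil {
		panic(err)
	}
}
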
	I0917 02:11:27.559048    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:11:27.568292    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:11:27.568351    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 02:11:27.577686    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 02:11:27.591394    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:11:27.604835    4110 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:11:27.618390    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:11:27.621271    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
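
The grep/echo pipeline rewrites /etc/hosts so control-plane.minikube.internal always resolves to the VIP without duplicating the entry. The same idempotent update in Go, as a sketch; the real command runs over SSH with sudo:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.169.0.254\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Drop any existing line for the host, mirroring the `grep -v` step.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	// Write to a temp file and move it into place, like the /tmp/h.$$ + cp dance.
	tmp := "/etc/hosts.tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
		panic(err)
	}
}
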
	I0917 02:11:27.630851    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:27.729065    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:11:27.743762    4110 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:11:27.743972    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:27.765105    4110 out.go:177] * Verifying Kubernetes components...
	I0917 02:11:27.805899    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:27.933521    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:11:27.948089    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:11:27.948282    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 02:11:27.948321    4110 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 02:11:27.948495    4110 node_ready.go:35] waiting up to 6m0s for node "ha-857000-m02" to be "Ready" ...
	I0917 02:11:27.948579    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:27.948584    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:27.948591    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:27.948595    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:28.948736    4110 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0917 02:11:28.948861    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:28.948870    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:28.948878    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:28.948882    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.256443    4110 round_trippers.go:574] Response Status: 200 OK in 7307 milliseconds
	I0917 02:11:36.257038    4110 node_ready.go:49] node "ha-857000-m02" has status "Ready":"True"
	I0917 02:11:36.257051    4110 node_ready.go:38] duration metric: took 8.308394835s for node "ha-857000-m02" to be "Ready" ...
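
node_ready.go's loop amounts to polling GET /api/v1/nodes/<name> until the NodeReady condition reports True, under the 6m0s budget noted above. A compact client-go version of that wait (a sketch, assuming the kubeconfig path from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19648-1025/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-857000-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(time.Second)
	}
	panic("timed out waiting for node to be Ready")
}
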
	I0917 02:11:36.257061    4110 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0917 02:11:36.257098    4110 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 02:11:36.257107    4110 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 02:11:36.257147    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:36.257152    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.257158    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.257164    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.271996    4110 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0917 02:11:36.280676    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.280736    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:11:36.280742    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.280752    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.280756    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.307985    4110 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0917 02:11:36.308476    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.308484    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.308491    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.308501    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.312984    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:36.313392    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.313402    4110 pod_ready.go:82] duration metric: took 32.709315ms for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.313409    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.313452    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nl5j5
	I0917 02:11:36.313457    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.313463    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.313468    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.319771    4110 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 02:11:36.320384    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.320393    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.320400    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.320403    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.322816    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:36.323378    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.323388    4110 pod_ready.go:82] duration metric: took 9.97387ms for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.323395    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.323435    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000
	I0917 02:11:36.323440    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.323446    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.323450    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.327486    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:36.328047    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.328054    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.328060    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.328063    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.331571    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:36.332110    4110 pod_ready.go:93] pod "etcd-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.332121    4110 pod_ready.go:82] duration metric: took 8.720083ms for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.332128    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.332168    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m02
	I0917 02:11:36.332173    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.332179    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.332184    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.336324    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:36.336846    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:36.336854    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.336860    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.336864    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.340608    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:36.341048    4110 pod_ready.go:93] pod "etcd-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.341057    4110 pod_ready.go:82] duration metric: took 8.92351ms for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.341064    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.341104    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:11:36.341110    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.341116    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.341121    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.343462    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:36.458248    4110 request.go:632] Waited for 114.333049ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:36.458307    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:36.458312    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.458318    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.458326    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.466021    4110 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 02:11:36.466526    4110 pod_ready.go:93] pod "etcd-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.466536    4110 pod_ready.go:82] duration metric: took 125.46489ms for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
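
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines are client-go's default client-side rate limiter (QPS 5, burst 10) pacing these polls; the API server is not pushing back. Raising the limits on rest.Config makes them disappear, as this sketch shows:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19648-1025/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; tight polling loops like the one in
	// this log trip the limiter and emit the throttling messages.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
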
	I0917 02:11:36.466548    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.657514    4110 request.go:632] Waited for 190.921312ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:11:36.657567    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:11:36.657574    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.657579    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.657584    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.659804    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:36.857671    4110 request.go:632] Waited for 197.395211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.857701    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.857705    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.857711    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.857715    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.861065    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:36.861653    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.861669    4110 pod_ready.go:82] duration metric: took 395.104039ms for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.861677    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.057332    4110 request.go:632] Waited for 195.603008ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:11:37.057382    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:11:37.057387    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.057393    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.057398    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.060216    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:37.258671    4110 request.go:632] Waited for 197.954534ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:37.258706    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:37.258713    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.258721    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.258727    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.267718    4110 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 02:11:37.268069    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:37.268082    4110 pod_ready.go:82] duration metric: took 406.392892ms for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.268090    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.457925    4110 request.go:632] Waited for 189.791882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:11:37.457975    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:11:37.457980    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.457987    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.457992    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.461663    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:37.658806    4110 request.go:632] Waited for 196.487027ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:37.658861    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:37.658867    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.658874    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.658878    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.661429    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:37.661888    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:37.661897    4110 pod_ready.go:82] duration metric: took 393.794602ms for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.661905    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.857414    4110 request.go:632] Waited for 195.469923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:11:37.857468    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:11:37.857474    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.857481    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.857486    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.860019    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.057880    4110 request.go:632] Waited for 197.333642ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:38.057910    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:38.057915    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.057922    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.057927    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.060540    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.061091    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:38.061101    4110 pod_ready.go:82] duration metric: took 399.184022ms for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.061109    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.257757    4110 request.go:632] Waited for 196.608954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:11:38.257857    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:11:38.257871    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.257877    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.257882    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.259904    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.458082    4110 request.go:632] Waited for 197.709678ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:38.458138    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:38.458147    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.458154    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.458158    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.460347    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.460715    4110 pod_ready.go:98] node "ha-857000-m02" hosting pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:38.460726    4110 pod_ready.go:82] duration metric: took 399.604676ms for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	E0917 02:11:38.460732    4110 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-857000-m02" hosting pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:38.460739    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.658188    4110 request.go:632] Waited for 197.403717ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:11:38.658255    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:11:38.658261    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.658267    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.658271    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.660934    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.857786    4110 request.go:632] Waited for 196.168284ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:38.857841    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:38.857851    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.857863    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.857873    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.861470    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:38.861751    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:38.861759    4110 pod_ready.go:82] duration metric: took 401.003253ms for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.861766    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.057800    4110 request.go:632] Waited for 195.986319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:11:39.057882    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:11:39.057893    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.057904    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.057912    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.061639    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:39.257697    4110 request.go:632] Waited for 195.312452ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:11:39.257726    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:11:39.257731    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.257737    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.257741    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.260209    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:39.260462    4110 pod_ready.go:93] pod "kube-proxy-528ht" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:39.260471    4110 pod_ready.go:82] duration metric: took 398.692905ms for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.260478    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.459321    4110 request.go:632] Waited for 198.788481ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:11:39.459387    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:11:39.459394    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.459411    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.459422    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.461885    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:39.657441    4110 request.go:632] Waited for 195.121107ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:39.657541    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:39.657551    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.657579    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.657585    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.661441    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:39.661929    4110 pod_ready.go:93] pod "kube-proxy-g9wxm" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:39.661942    4110 pod_ready.go:82] duration metric: took 401.451734ms for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.661951    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.857721    4110 request.go:632] Waited for 195.727193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:11:39.857785    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:11:39.857791    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.857797    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.857802    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.859663    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:11:40.058574    4110 request.go:632] Waited for 198.443343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.058668    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.058679    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.058690    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.058699    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.062499    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:40.063124    4110 pod_ready.go:93] pod "kube-proxy-vskbj" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:40.063133    4110 pod_ready.go:82] duration metric: took 401.170349ms for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.063140    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.257873    4110 request.go:632] Waited for 194.653928ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:11:40.257927    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:11:40.257937    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.257948    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.257956    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.262255    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:40.458287    4110 request.go:632] Waited for 195.380222ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:40.458411    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:40.458421    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.458432    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.458443    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.462171    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:40.462629    4110 pod_ready.go:98] node "ha-857000-m02" hosting pod "kube-proxy-zrqvr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:40.462643    4110 pod_ready.go:82] duration metric: took 399.490798ms for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	E0917 02:11:40.462673    4110 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-857000-m02" hosting pod "kube-proxy-zrqvr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:40.462687    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.658101    4110 request.go:632] Waited for 195.359912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:11:40.658147    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:11:40.658152    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.658159    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.658164    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.660407    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:40.858455    4110 request.go:632] Waited for 197.559018ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.858564    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.858583    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.858595    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.858601    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.861876    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:40.862327    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:40.862336    4110 pod_ready.go:82] duration metric: took 399.635382ms for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.862343    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:41.057949    4110 request.go:632] Waited for 195.512959ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:11:41.058021    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:11:41.058032    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.058044    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.058051    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.061708    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:41.257802    4110 request.go:632] Waited for 195.475163ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:41.257884    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:41.257895    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.257906    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.257913    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.261190    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:41.261502    4110 pod_ready.go:98] node "ha-857000-m02" hosting pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:41.261513    4110 pod_ready.go:82] duration metric: took 399.156939ms for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	E0917 02:11:41.261527    4110 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-857000-m02" hosting pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:41.261532    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:41.458981    4110 request.go:632] Waited for 197.407496ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:11:41.459061    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:11:41.459070    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.459078    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.459084    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.461880    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:41.657846    4110 request.go:632] Waited for 195.542216ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:41.657906    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:41.657913    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.657921    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.657934    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.660204    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:41.660601    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:41.660610    4110 pod_ready.go:82] duration metric: took 399.066544ms for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:41.660617    4110 pod_ready.go:39] duration metric: took 5.403454072s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:11:41.660636    4110 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:11:41.660697    4110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:11:41.672821    4110 api_server.go:72] duration metric: took 13.928795458s to wait for apiserver process to appear ...
	I0917 02:11:41.672831    4110 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:11:41.672845    4110 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 02:11:41.683603    4110 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0917 02:11:41.683654    4110 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 02:11:41.683660    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.683666    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.683670    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.684276    4110 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:11:41.684340    4110 api_server.go:141] control plane version: v1.31.1
	I0917 02:11:41.684350    4110 api_server.go:131] duration metric: took 11.515194ms to wait for apiserver health ...
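
The healthz gate is a plain HTTPS GET of /healthz that must return 200 with an "ok" body, which is exactly what the two lines above record. A standalone probe, as a sketch; it skips TLS verification instead of loading the minikube CA the way the real client does:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The log's client trusts the minikube CA; skipping
			// verification here keeps the sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
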
	I0917 02:11:41.684356    4110 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 02:11:41.857675    4110 request.go:632] Waited for 173.274042ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:41.857793    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:41.857803    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.857823    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.857833    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.863157    4110 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:11:41.868330    4110 system_pods.go:59] 26 kube-system pods found
	I0917 02:11:41.868348    4110 system_pods.go:61] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running
	I0917 02:11:41.868352    4110 system_pods.go:61] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running
	I0917 02:11:41.868360    4110 system_pods.go:61] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:11:41.868366    4110 system_pods.go:61] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 02:11:41.868371    4110 system_pods.go:61] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:11:41.868377    4110 system_pods.go:61] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:11:41.868392    4110 system_pods.go:61] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running
	I0917 02:11:41.868398    4110 system_pods.go:61] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:11:41.868402    4110 system_pods.go:61] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:11:41.868406    4110 system_pods.go:61] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:11:41.868424    4110 system_pods.go:61] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 02:11:41.868430    4110 system_pods.go:61] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:11:41.868434    4110 system_pods.go:61] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:11:41.868438    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 02:11:41.868442    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:11:41.868445    4110 system_pods.go:61] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:11:41.868448    4110 system_pods.go:61] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:11:41.868450    4110 system_pods.go:61] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running
	I0917 02:11:41.868454    4110 system_pods.go:61] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:11:41.868456    4110 system_pods.go:61] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:11:41.868468    4110 system_pods.go:61] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 02:11:41.868473    4110 system_pods.go:61] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:11:41.868484    4110 system_pods.go:61] "kube-vip-ha-857000" [84b805d8-9a8f-4c6f-b18f-76c98ca4776c] Running
	I0917 02:11:41.868488    4110 system_pods.go:61] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:11:41.868490    4110 system_pods.go:61] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:11:41.868493    4110 system_pods.go:61] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:11:41.868498    4110 system_pods.go:74] duration metric: took 184.134673ms to wait for pod list to return data ...
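The wait above is a plain List of the kube-system pods, with each pod classified by its phase and Ready condition; the "Waited ... due to client-side throttling" lines come from client-go's default client-side rate limiter (QPS 5, burst 10), not from API Priority and Fairness. A minimal client-go sketch of the same check (an illustrative reimplementation, not minikube's code; the kubeconfig path is hypothetical):

	// podcheck.go: a minimal sketch of the system_pods wait above: list the
	// kube-system pods and report each one's phase and Ready condition.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady {
					ready = c.Status == corev1.ConditionTrue
				}
			}
			fmt.Printf("%s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
		}
	}

A pod like etcd-ha-857000-m02 above is Running by phase but fails the Ready condition, which is exactly the "Running / Ready:ContainersNotReady" split in the log.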
	I0917 02:11:41.868509    4110 default_sa.go:34] waiting for default service account to be created ...
	I0917 02:11:42.057457    4110 request.go:632] Waited for 188.887232ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:11:42.057501    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:11:42.057507    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:42.057512    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:42.057516    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:42.060122    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:42.060299    4110 default_sa.go:45] found service account: "default"
	I0917 02:11:42.060314    4110 default_sa.go:55] duration metric: took 191.792113ms for default service account to be created ...
	I0917 02:11:42.060320    4110 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 02:11:42.257458    4110 request.go:632] Waited for 197.098839ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:42.257490    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:42.257495    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:42.257501    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:42.257506    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:42.261392    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:42.267316    4110 system_pods.go:86] 26 kube-system pods found
	I0917 02:11:42.267336    4110 system_pods.go:89] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running
	I0917 02:11:42.267340    4110 system_pods.go:89] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running
	I0917 02:11:42.267343    4110 system_pods.go:89] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:11:42.267356    4110 system_pods.go:89] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 02:11:42.267362    4110 system_pods.go:89] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:11:42.267366    4110 system_pods.go:89] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:11:42.267369    4110 system_pods.go:89] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running
	I0917 02:11:42.267372    4110 system_pods.go:89] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:11:42.267377    4110 system_pods.go:89] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:11:42.267380    4110 system_pods.go:89] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:11:42.267385    4110 system_pods.go:89] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 02:11:42.267389    4110 system_pods.go:89] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:11:42.267392    4110 system_pods.go:89] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:11:42.267398    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 02:11:42.267402    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:11:42.267405    4110 system_pods.go:89] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:11:42.267408    4110 system_pods.go:89] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:11:42.267411    4110 system_pods.go:89] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running
	I0917 02:11:42.267415    4110 system_pods.go:89] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:11:42.267419    4110 system_pods.go:89] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:11:42.267423    4110 system_pods.go:89] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 02:11:42.267427    4110 system_pods.go:89] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:11:42.267436    4110 system_pods.go:89] "kube-vip-ha-857000" [84b805d8-9a8f-4c6f-b18f-76c98ca4776c] Running
	I0917 02:11:42.267438    4110 system_pods.go:89] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:11:42.267441    4110 system_pods.go:89] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:11:42.267444    4110 system_pods.go:89] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:11:42.267448    4110 system_pods.go:126] duration metric: took 207.120728ms to wait for k8s-apps to be running ...
	I0917 02:11:42.267459    4110 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:11:42.267525    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:11:42.280323    4110 system_svc.go:56] duration metric: took 12.855514ms WaitForService to wait for kubelet
	I0917 02:11:42.280342    4110 kubeadm.go:582] duration metric: took 14.536306226s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
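The kubelet probe above leans entirely on the exit status: `systemctl is-active --quiet <unit>` prints nothing and exits 0 only when the unit is active. A sketch of the same probe from Go, using the standard `is-active` form (the logged invocation runs over SSH with sudo):

	// svccheck.go: sketch of the kubelet liveness probe above. The command's
	// error value is the whole answer: nil means the unit is active.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}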
	I0917 02:11:42.280356    4110 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:11:42.458901    4110 request.go:632] Waited for 178.497588ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 02:11:42.458965    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 02:11:42.458970    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:42.458975    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:42.458980    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:42.461607    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:42.462345    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462358    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462367    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462370    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462374    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462377    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462380    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462383    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462386    4110 node_conditions.go:105] duration metric: took 182.022805ms to run NodePressure ...
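The NodePressure pass reads every Node object once and reports its capacity fields, so the four cpu/ephemeral-storage pairs above are one per node. A sketch of pulling the same fields, assuming a *kubernetes.Clientset built as in the earlier sketch:

	// nodecaps.go: read every Node and print the capacity values the log
	// reports ("node storage ephemeral capacity", "node cpu capacity").
	package inspect

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
		return nil
	}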
	I0917 02:11:42.462394    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:11:42.462412    4110 start.go:255] writing updated cluster config ...
	I0917 02:11:42.484336    4110 out.go:201] 
	I0917 02:11:42.505774    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:42.505869    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:11:42.527331    4110 out.go:177] * Starting "ha-857000-m03" control-plane node in "ha-857000" cluster
	I0917 02:11:42.569515    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:11:42.569551    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:11:42.569751    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:11:42.569769    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
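The preload step is a local cache hit: if the tarball is already on disk it is used as-is and the download is skipped. A sketch of that existence check (the path is taken from the log; any checksum verification is omitted here):

	// preload.go: skip the preload download when the tarball is cached.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		p := "/Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4"
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found in cache, skipping download")
		} else {
			fmt.Println("not cached:", err)
		}
	}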
	I0917 02:11:42.569891    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:11:42.570622    4110 start.go:360] acquireMachinesLock for ha-857000-m03: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:11:42.570733    4110 start.go:364] duration metric: took 89.66µs to acquireMachinesLock for "ha-857000-m03"
	I0917 02:11:42.570758    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:11:42.570766    4110 fix.go:54] fixHost starting: m03
	I0917 02:11:42.571203    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:11:42.571238    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:11:42.581037    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52144
	I0917 02:11:42.581469    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:11:42.581811    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:11:42.581822    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:11:42.582051    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:11:42.582209    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:42.582294    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetState
	I0917 02:11:42.582428    4110 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:42.582545    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid from json: 3442
	I0917 02:11:42.583498    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid 3442 missing from process table
	I0917 02:11:42.583556    4110 fix.go:112] recreateIfNeeded on ha-857000-m03: state=Stopped err=<nil>
	I0917 02:11:42.583568    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	W0917 02:11:42.583655    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:11:42.604438    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m03" ...
	I0917 02:11:42.678579    4110 main.go:141] libmachine: (ha-857000-m03) Calling .Start
	I0917 02:11:42.678864    4110 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:42.678945    4110 main.go:141] libmachine: (ha-857000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid
	I0917 02:11:42.680796    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid 3442 missing from process table
	I0917 02:11:42.680811    4110 main.go:141] libmachine: (ha-857000-m03) DBG | pid 3442 is in state "Stopped"
	I0917 02:11:42.680856    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid...
	I0917 02:11:42.681059    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Using UUID 3d8fdba9-dbf7-47ea-a80b-a24a99cad96e
	I0917 02:11:42.708058    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Generated MAC 16:4d:1d:5e:91:c8
	I0917 02:11:42.708080    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:11:42.708229    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3d8fdba9-dbf7-47ea-a80b-a24a99cad96e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000398c90)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:11:42.708256    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3d8fdba9-dbf7-47ea-a80b-a24a99cad96e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000398c90)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:11:42.708317    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3d8fdba9-dbf7-47ea-a80b-a24a99cad96e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/ha-857000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:11:42.708369    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3d8fdba9-dbf7-47ea-a80b-a24a99cad96e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/ha-857000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:11:42.708386    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:11:42.710198    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Pid is 4146
	I0917 02:11:42.710768    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Attempt 0
	I0917 02:11:42.710795    4110 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:42.710847    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid from json: 4146
	I0917 02:11:42.712907    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Searching for 16:4d:1d:5e:91:c8 in /var/db/dhcpd_leases ...
	I0917 02:11:42.712978    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:11:42.713009    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:11:42.713035    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:11:42.713060    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:11:42.713079    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9724}
	I0917 02:11:42.713098    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Found match: 16:4d:1d:5e:91:c8
	I0917 02:11:42.713110    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetConfigRaw
	I0917 02:11:42.713129    4110 main.go:141] libmachine: (ha-857000-m03) DBG | IP: 192.169.0.7
	I0917 02:11:42.713812    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
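The IP discovery above works by generating a MAC for the VM and then scanning the host's /var/db/dhcpd_leases until an entry with that hardware address appears. A rough sketch of that lookup; the lease-file field names (ip_address=, hw_address=) are the usual macOS bootpd format and are an assumption here, since the log only shows the already-parsed entries:

	// leasematch.go: resolve a VM's IP by matching its MAC in the host's
	// DHCP lease file, as the DBG lines above do.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func ipForMAC(leasePath, mac string) (string, error) {
		f, err := os.Open(leasePath)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// Entries look like "hw_address=1,16:4d:1d:5e:91:c8". Note that
				// bootpd drops leading zeros in octets (see "a:b6:8:34:25:a6"
				// above), so a robust matcher would normalize both sides.
				if strings.HasSuffix(line, mac) {
					return ip, nil
				}
			case line == "}":
				ip = "" // end of a lease block without a match
			}
		}
		return "", fmt.Errorf("MAC %s not found in %s", mac, leasePath)
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "16:4d:1d:5e:91:c8")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ip) // 192.169.0.7 on the run above
	}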
	I0917 02:11:42.714067    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:11:42.714634    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:11:42.714648    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:42.714804    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:42.714912    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:42.715030    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:42.715172    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:42.715275    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:42.715462    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:42.715719    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:42.715729    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:11:42.719370    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:11:42.729567    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:11:42.730522    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:11:42.730552    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:11:42.730564    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:11:42.730573    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:11:43.130217    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:11:43.130237    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:11:43.246057    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:11:43.246080    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:11:43.246089    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:11:43.246096    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:11:43.246900    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:11:43.246909    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:11:48.954281    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:11:48.954379    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:11:48.954390    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:11:48.977816    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:11:53.786367    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:11:53.786383    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetMachineName
	I0917 02:11:53.786507    4110 buildroot.go:166] provisioning hostname "ha-857000-m03"
	I0917 02:11:53.786518    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetMachineName
	I0917 02:11:53.786619    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:53.786716    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:53.786814    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.786901    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.786991    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:53.787125    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:53.787256    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:53.787264    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m03 && echo "ha-857000-m03" | sudo tee /etc/hostname
	I0917 02:11:53.860809    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m03
	
	I0917 02:11:53.860831    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:53.860995    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:53.861092    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.861199    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.861302    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:53.861448    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:53.861610    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:53.861621    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:11:53.932575    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:11:53.932592    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:11:53.932604    4110 buildroot.go:174] setting up certificates
	I0917 02:11:53.932611    4110 provision.go:84] configureAuth start
	I0917 02:11:53.932618    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetMachineName
	I0917 02:11:53.932757    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:11:53.932853    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:53.932933    4110 provision.go:143] copyHostCerts
	I0917 02:11:53.932962    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:53.933012    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:11:53.933018    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:53.933153    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:11:53.933356    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:53.933385    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:11:53.933389    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:53.933461    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:11:53.933602    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:53.933640    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:11:53.933645    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:53.933711    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:11:53.933855    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m03 san=[127.0.0.1 192.169.0.7 ha-857000-m03 localhost minikube]
	I0917 02:11:54.077333    4110 provision.go:177] copyRemoteCerts
	I0917 02:11:54.077392    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:11:54.077407    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.077544    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.077643    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.077738    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.077820    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:54.116797    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:11:54.116876    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:11:54.136202    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:11:54.136278    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:11:54.156340    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:11:54.156419    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:11:54.175630    4110 provision.go:87] duration metric: took 243.006586ms to configureAuth
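configureAuth copies the host CA material over and, where needed, issues a server certificate whose SAN list is exactly what the provision step above printed: [127.0.0.1 192.169.0.7 ha-857000-m03 localhost minikube]. A crypto/x509 sketch of issuing such a certificate from an already-loaded CA (key size, serial, and validity window are illustrative choices, not minikube's; loading the CA PEM is omitted):

	// servercert.go: issue a TLS server cert covering the node's IPs and names.
	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN list from the provision step above.
			DNSNames:    []string{"ha-857000-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}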
	I0917 02:11:54.175645    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:11:54.175825    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:54.175845    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:54.175978    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.176072    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.176183    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.176286    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.176390    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.176544    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:54.176682    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:54.176690    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:11:54.238979    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:11:54.238993    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:11:54.239102    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:11:54.239114    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.239249    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.239359    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.239453    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.239547    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.239702    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:54.239844    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:54.239889    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:11:54.314599    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:11:54.314621    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.314767    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.314854    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.314947    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.315024    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.315150    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:54.315292    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:54.315304    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:11:55.935197    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:11:55.935211    4110 machine.go:96] duration metric: took 13.220338614s to provisionDockerMachine
	I0917 02:11:55.935219    4110 start.go:293] postStartSetup for "ha-857000-m03" (driver="hyperkit")
	I0917 02:11:55.935226    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:11:55.935240    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:55.935436    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:11:55.935456    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:55.935555    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:55.935640    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:55.935720    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:55.935796    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:55.975655    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:11:55.982326    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:11:55.982340    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:11:55.982439    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:11:55.982583    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:11:55.982589    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:11:55.982752    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:11:55.995355    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:11:56.016063    4110 start.go:296] duration metric: took 80.833975ms for postStartSetup
	I0917 02:11:56.016085    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.016278    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:11:56.016292    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:56.016390    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.016474    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.016549    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.016621    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:56.056575    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:11:56.056644    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:11:56.090435    4110 fix.go:56] duration metric: took 13.519431085s for fixHost
	I0917 02:11:56.090460    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:56.090600    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.090686    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.090776    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.090860    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.090993    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:56.091136    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:56.091142    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:11:56.155623    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564316.081021180
	
	I0917 02:11:56.155639    4110 fix.go:216] guest clock: 1726564316.081021180
	I0917 02:11:56.155645    4110 fix.go:229] Guest: 2024-09-17 02:11:56.08102118 -0700 PDT Remote: 2024-09-17 02:11:56.09045 -0700 PDT m=+89.019475712 (delta=-9.42882ms)
	I0917 02:11:56.155656    4110 fix.go:200] guest clock delta is within tolerance: -9.42882ms
	I0917 02:11:56.155660    4110 start.go:83] releasing machines lock for "ha-857000-m03", held for 13.584681554s
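The guest-clock check above runs `date +%s.%N` inside the VM and compares the parsed result against the host clock; here the delta was -9.42882ms, well inside tolerance. A sketch of the parse-and-compare step (the 2s tolerance below is an assumed value for illustration; the log does not state the real threshold):

	// clockdelta.go: parse the guest's `date +%s.%N` output and compare it
	// to the host clock, as the fix.go lines above do.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 { // %N always yields nine digits
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1726564316.081021180") // value from the log
		if err != nil {
			panic(err)
		}
		delta := guest.Sub(time.Now())
		fmt.Printf("delta=%v within tolerance=%v\n", delta, delta.Abs() < 2*time.Second)
	}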
	I0917 02:11:56.155677    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.155816    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:11:56.177120    4110 out.go:177] * Found network options:
	I0917 02:11:56.197056    4110 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0917 02:11:56.217835    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:11:56.217862    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:56.217881    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.218511    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.218685    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.218846    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 02:11:56.218876    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:56.218892    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	W0917 02:11:56.218898    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:56.219005    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:11:56.219024    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:56.219078    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.219246    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.219309    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.219439    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.219492    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.219585    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:56.219614    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.219751    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	W0917 02:11:56.256644    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:11:56.256720    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:11:56.309886    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:11:56.309904    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:56.309980    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:56.326165    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:11:56.334717    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:11:56.343026    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:11:56.343079    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:11:56.351351    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:56.359978    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:11:56.368445    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:56.376813    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:11:56.385309    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:11:56.393895    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:11:56.402441    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:11:56.410891    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:11:56.418564    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:11:56.426298    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:56.529182    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:11:56.548629    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:56.548711    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:11:56.564564    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:56.575668    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:11:56.592483    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:56.605747    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:56.616286    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:11:56.636099    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:56.646661    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:56.662025    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:11:56.665163    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:11:56.672775    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:11:56.686783    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:11:56.787618    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:11:56.902014    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:11:56.902043    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:11:56.916683    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:57.010321    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:11:59.292351    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.28197073s)
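The docker.go step above writes a small /etc/docker/daemon.json (130 bytes) to pin the cgroup driver to cgroupfs, then daemon-reloads and restarts docker. The exact file contents are not in the log; a sketch of rendering such a file, assuming the standard dockerd "exec-opts" setting:

	// daemonjson.go: render a daemon.json selecting the cgroupfs driver.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(b)) // this is what would be scp'd to the guest
	}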
	I0917 02:11:59.292423    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:11:59.302881    4110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:11:59.315909    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:59.326097    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:11:59.423622    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:11:59.534194    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:59.650222    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:11:59.664197    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:59.675195    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:59.768785    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:11:59.834137    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:11:59.834234    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:11:59.838654    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:11:59.838726    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:11:59.844060    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:11:59.874850    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:11:59.874944    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:59.893142    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:59.934010    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:11:59.974908    4110 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 02:11:59.996010    4110 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0917 02:12:00.016678    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:12:00.016979    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:12:00.020450    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:12:00.029942    4110 mustload.go:65] Loading cluster: ha-857000
	I0917 02:12:00.030121    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:00.030345    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:00.030368    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:00.039149    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52286
	I0917 02:12:00.039489    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:00.039838    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:00.039856    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:00.040084    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:00.040206    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:12:00.040304    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:00.040367    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:12:00.041347    4110 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:12:00.041604    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:00.041629    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:00.050248    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52288
	I0917 02:12:00.050590    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:00.050943    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:00.050963    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:00.051142    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:00.051249    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:12:00.051358    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.7
	I0917 02:12:00.051364    4110 certs.go:194] generating shared ca certs ...
	I0917 02:12:00.051373    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:12:00.051518    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:12:00.051569    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:12:00.051578    4110 certs.go:256] generating profile certs ...
	I0917 02:12:00.051672    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:12:00.051762    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.daf177bc
	I0917 02:12:00.051812    4110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:12:00.051819    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:12:00.051841    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:12:00.051859    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:12:00.051878    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:12:00.051895    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:12:00.051919    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:12:00.051943    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:12:00.051962    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:12:00.052037    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:12:00.052085    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:12:00.052093    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:12:00.052128    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:12:00.052160    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:12:00.052188    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:12:00.052263    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:12:00.052296    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.052317    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.052334    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.052362    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:12:00.052450    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:12:00.052535    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:12:00.052624    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:12:00.052722    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
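sshutil.go builds the client behind every Run: line from just the tuple printed above: IP, port, key path, and user. A self-contained sketch of that connection with golang.org/x/crypto/ssh; host-key checking is skipped purely for brevity, and the key path is abbreviated relative to $HOME:

-- sketch (Go) --
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/ha-857000/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		// Same probe the log issues right after the client comes up.
		out, _ := session.CombinedOutput("stat -c %s /var/lib/minikube/certs/sa.pub")
		fmt.Print(string(out))
	}
-- /sketch --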
	I0917 02:12:00.080096    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 02:12:00.083244    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 02:12:00.090969    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 02:12:00.094112    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 02:12:00.101834    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 02:12:00.104986    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 02:12:00.113430    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 02:12:00.116712    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 02:12:00.124546    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 02:12:00.127709    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 02:12:00.135587    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 02:12:00.138750    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 02:12:00.147884    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:12:00.168533    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:12:00.188900    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:12:00.208781    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:12:00.229275    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:12:00.248994    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:12:00.269569    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:12:00.289646    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:12:00.309509    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:12:00.329488    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:12:00.349487    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:12:00.369414    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 02:12:00.383327    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 02:12:00.396803    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 02:12:00.410693    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 02:12:00.424533    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 02:12:00.438144    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 02:12:00.451710    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 02:12:00.465698    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:12:00.470190    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:12:00.478670    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.482005    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.482051    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.486183    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:12:00.494427    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:12:00.503098    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.506593    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.506643    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.510950    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:12:00.519387    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:12:00.527796    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.531174    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.531231    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.535528    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:12:00.543734    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:12:00.547058    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:12:00.551336    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:12:00.555666    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:12:00.560095    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:12:00.564671    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:12:00.568907    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
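Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 24 hours, which is how the join decides whether any cert needs regenerating. The same check in Go with crypto/x509 (path taken from the log; run on the guest, not the host):

-- sketch (Go) --
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
		// will already be expired 24h from now.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h; regenerate")
			os.Exit(1)
		}
		fmt.Println("certificate valid beyond 24h")
	}
-- /sketch --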
	I0917 02:12:00.573116    4110 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.1 docker true true} ...
	I0917 02:12:00.573181    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
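kubeadm.go:946 renders the kubelet drop-in shown above from the node tuple {m03 192.169.0.7 8443 ...}; the only per-node pieces are --hostname-override and --node-ip. A toy text/template rendering of that ExecStart line, assuming those are the sole substitutions:

-- sketch (Go) --
	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		// Values lifted from the node line in the log above.
		err := t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.31.1",
			"Hostname":          "ha-857000-m03",
			"NodeIP":            "192.169.0.7",
		})
		if err != nil {
			panic(err)
		}
	}
-- /sketch --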
	I0917 02:12:00.573213    4110 kube-vip.go:115] generating kube-vip config ...
	I0917 02:12:00.573252    4110 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:12:00.585709    4110 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:12:00.585750    4110 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
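The manifest above runs kube-vip as a static pod in ARP mode: leader election over the plndr-cp-lock lease decides which control-plane node answers for 192.169.0.254, and lb_enable turns on the control-plane load balancing the log auto-enabled at kube-vip.go:167. Once a leader holds the VIP, a probe like the following sketch should answer; TLS verification is skipped only because this probe doesn't load minikubeCA, which a real client would pin:

-- sketch (Go) --
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert chains to minikubeCA, which this
				// sketch doesn't carry; skip verification for the probe only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.169.0.254:8443/healthz")
		if err != nil {
			panic(err) // no leader holds the VIP yet
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, string(body))
	}
-- /sketch --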
	I0917 02:12:00.585815    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:12:00.593621    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:12:00.593672    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 02:12:00.600967    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 02:12:00.614925    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:12:00.628761    4110 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:12:00.642265    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:12:00.645102    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:12:00.654336    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:00.752482    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:00.767122    4110 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:12:00.767316    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:00.788252    4110 out.go:177] * Verifying Kubernetes components...
	I0917 02:12:00.808843    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:00.927434    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:00.944321    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:12:00.944565    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 02:12:00.944614    4110 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
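kubeadm.go:483 notices the loaded kubeconfig still points at the HA VIP (192.169.0.254) and rewrites the client to target a concrete control-plane endpoint, since the VIP may have no leader while a node is joining. With client-go, that override is a single field write on an otherwise-loaded config; a sketch using the kubeconfig path printed above:

-- sketch (Go) --
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19648-1025/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Same move as the log's "Overriding stale ClientConfig host":
		// bypass the HA VIP and talk to one control-plane node directly.
		cfg.Host = "https://192.169.0.5:8443"
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("client ready:", clientset != nil)
	}
-- /sketch --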
	I0917 02:12:00.944789    4110 node_ready.go:35] waiting up to 6m0s for node "ha-857000-m03" to be "Ready" ...
	I0917 02:12:00.944851    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:00.944858    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.944867    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.944872    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.946764    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:00.947061    4110 node_ready.go:49] node "ha-857000-m03" has status "Ready":"True"
	I0917 02:12:00.947072    4110 node_ready.go:38] duration metric: took 2.273862ms for node "ha-857000-m03" to be "Ready" ...
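node_ready.go decides "Ready" by fetching the Node object (the GET /api/v1/nodes/ha-857000-m03 above) and reading its conditions. The equivalent client-go check, built on the same kubeconfig and host override as the previous sketch:

-- sketch (Go) --
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the NodeReady condition is True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("https://192.169.0.5:8443",
			"/Users/jenkins/minikube-integration/19648-1025/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := nodeReady(cs, "ha-857000-m03")
		fmt.Println(ready, err)
	}
-- /sketch --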
	I0917 02:12:00.947078    4110 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:12:00.947127    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:00.947133    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.947139    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.947143    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.950970    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:00.956449    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.956504    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:00.956511    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.956518    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.956526    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.959279    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.959653    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:00.959660    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.959666    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.959669    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.961657    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:00.962160    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.962170    4110 pod_ready.go:82] duration metric: took 5.706294ms for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.962176    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.962215    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nl5j5
	I0917 02:12:00.962221    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.962226    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.962230    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.966635    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:00.967113    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:00.967122    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.967128    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.967131    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.969231    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.969585    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.969594    4110 pod_ready.go:82] duration metric: took 7.413149ms for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.969601    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.969645    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000
	I0917 02:12:00.969650    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.969655    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.969659    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.971799    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.972247    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:00.972254    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.972264    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.972267    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.974411    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.974879    4110 pod_ready.go:93] pod "etcd-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.974888    4110 pod_ready.go:82] duration metric: took 5.282457ms for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.974895    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.974931    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m02
	I0917 02:12:00.974936    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.974941    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.974945    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.977288    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.977952    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:00.977959    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.977964    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.977966    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.980610    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.981051    4110 pod_ready.go:93] pod "etcd-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.981061    4110 pod_ready.go:82] duration metric: took 6.161283ms for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.981068    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.146340    4110 request.go:632] Waited for 165.222252ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:12:01.146408    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:12:01.146414    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.146420    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.146423    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.148663    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:01.345119    4110 request.go:632] Waited for 196.038973ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:01.345177    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:01.345186    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.345198    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.345210    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.348611    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:01.349143    4110 pod_ready.go:93] pod "etcd-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:01.349154    4110 pod_ready.go:82] duration metric: took 368.067559ms for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
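The request.go:632 "Waited for …ms due to client-side throttling" lines are client-go's default rate limiter at work: a rest.Config with zero QPS/Burst gets QPS 5 and burst 10, so the back-to-back pod and node GETs in this readiness sweep queue up client-side even though the server answers in 2-3 ms. Relaxing the limiter is two field writes before building the clientset; the values below are arbitrary:

-- sketch (Go) --
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19648-1025/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Left at zero these default to QPS=5, Burst=10; enough
		// sequential GETs per second will then sleep, as in the log.
		cfg.QPS = 50
		cfg.Burst = 100
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("clientset with relaxed throttle:", cs != nil)
	}
-- /sketch --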
	I0917 02:12:01.349166    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.545007    4110 request.go:632] Waited for 195.782486ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:01.545050    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:01.545055    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.545061    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.545066    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.547602    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:01.745603    4110 request.go:632] Waited for 197.630153ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:01.745661    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:01.745667    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.745673    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.745676    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.748299    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:01.748902    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:01.748919    4110 pod_ready.go:82] duration metric: took 399.734114ms for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.748926    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.945883    4110 request.go:632] Waited for 196.866004ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:01.945945    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:01.945954    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.945964    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.945969    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.951958    4110 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:12:02.145413    4110 request.go:632] Waited for 192.798684ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:02.145468    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:02.145478    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.145511    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.145520    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.148357    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.149190    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:02.149203    4110 pod_ready.go:82] duration metric: took 400.265258ms for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:02.149211    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:02.345683    4110 request.go:632] Waited for 196.426528ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.345728    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.345736    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.345744    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.345751    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.348508    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.544925    4110 request.go:632] Waited for 196.020856ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.544994    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.545000    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.545006    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.545009    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.547483    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.744993    4110 request.go:632] Waited for 95.563815ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.745048    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.745054    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.745061    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.745065    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.747122    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.945441    4110 request.go:632] Waited for 197.559126ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.945475    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.945480    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.945486    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.945491    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.948036    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:03.150936    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:03.150968    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.150975    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.150980    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.153272    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:03.346424    4110 request.go:632] Waited for 192.442992ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.346514    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.346521    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.346528    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.346533    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.350998    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:03.649774    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:03.649809    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.649818    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.649823    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.652931    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:03.744972    4110 request.go:632] Waited for 90.967061ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.745023    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.745029    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.745034    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.745039    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.747431    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:04.149979    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:04.150024    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.150033    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.150037    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.153328    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:04.153812    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:04.153822    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.153828    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.153832    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.156074    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:04.156716    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
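From here the log settles into pod_ready's poll: roughly every 500 ms it re-fetches the kube-apiserver-ha-857000-m03 pod and its node until the Ready condition flips, which it does at 02:12:12, 10.5 s in. The same wait expressed with client-go's wait helpers; the interval and timeout mirror the log's cadence and the 6m0s budget stated above:

-- sketch (Go) --
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("https://192.169.0.5:8443",
			"/Users/jenkins/minikube-integration/19648-1025/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-857000-m03", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait result:", err)
	}
-- /sketch --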
	I0917 02:12:04.650904    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:04.650924    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.650931    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.650946    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.653820    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:04.654378    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:04.654386    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.654393    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.654396    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.656654    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:05.151431    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:05.151485    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.151499    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.151506    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.154809    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:05.155323    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:05.155331    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.155337    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.155340    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.156965    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:05.650343    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:05.650367    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.650413    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.650421    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.653876    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:05.654508    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:05.654516    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.654522    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.654525    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.656260    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:06.149952    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:06.149982    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.149989    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.149994    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.152142    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:06.152594    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:06.152602    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.152608    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.152611    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.154378    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:06.650007    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:06.650040    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.650049    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.650053    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.652517    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:06.653131    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:06.653138    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.653144    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.653148    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.655153    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:06.655511    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:07.150612    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:07.150642    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.150678    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.150687    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.153805    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:07.154498    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:07.154508    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.154516    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.154521    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.156264    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:07.650356    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:07.650381    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.650392    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.650401    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.653535    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:07.653958    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:07.653966    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.653972    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.653975    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.656337    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:08.150386    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:08.150440    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.150452    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.150460    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.153584    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:08.155108    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:08.155123    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.155132    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.155143    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.157038    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:08.650349    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:08.650377    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.650389    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.650398    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.654034    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:08.654828    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:08.654836    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.654843    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.654846    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.656625    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:08.656928    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:09.151423    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:09.151447    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.151459    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.151464    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.154460    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:09.154947    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:09.154956    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.154961    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.154966    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.156555    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:09.650477    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:09.650503    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.650554    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.650568    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.653583    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:09.653960    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:09.653967    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.653973    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.653983    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.655828    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:10.149696    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:10.149720    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.149732    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.149739    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.153151    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:10.153716    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:10.153726    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.153734    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.153739    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.155758    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:10.649780    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:10.649830    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.649844    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.649854    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.653210    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:10.653938    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:10.653945    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.653951    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.653956    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.655718    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:11.149497    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:11.149512    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.149525    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.149530    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.151647    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:11.152174    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:11.152181    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.152187    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.152189    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.154098    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:11.154423    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:11.650969    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:11.650998    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.651032    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.651039    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.654171    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:11.654962    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:11.654969    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.654975    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.654979    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.656692    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.150871    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:12.150884    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.150890    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.150893    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.153079    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:12.153733    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:12.153741    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.153747    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.153751    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.155608    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.650611    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:12.650636    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.650674    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.650684    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.654409    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:12.654934    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:12.654941    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.654951    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.654954    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.656676    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.657136    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:12.657145    4110 pod_ready.go:82] duration metric: took 10.507747852s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
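[Editor's note] The loop above is minikube's standard readiness probe: every ~500ms it GETs the pod, then the node it is scheduled on, and repeats until the pod's Ready condition turns True (pod_ready.go:103 prints the "False" polls, pod_ready.go:93 the final "True"). Below is a minimal client-go sketch of that polling shape; the kubeconfig path is a placeholder and this is an illustration, not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, the cadence and budget in the log.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-apiserver-ha-857000-m03", metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors, keep polling
		}
		return podReady(pod), nil
	})
	fmt.Println("ready:", err == nil)
}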
	I0917 02:12:12.657152    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.657184    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:12:12.657189    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.657194    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.657198    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.658893    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.659304    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:12.659312    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.659317    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.659321    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.660920    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.661222    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:12.661230    4110 pod_ready.go:82] duration metric: took 4.073163ms for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.661237    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.661269    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:12:12.661274    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.661279    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.661282    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.662821    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.663178    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:12.663186    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.663192    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.663195    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.664635    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.665084    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:12.665092    4110 pod_ready.go:82] duration metric: took 3.849688ms for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.665098    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.665131    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:12.665136    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.665142    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.665157    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.666924    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.667551    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:12.667558    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.667564    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.667566    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.669116    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:13.165275    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:13.165342    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.165359    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.165367    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.168538    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:13.169042    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:13.169049    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.169054    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.169059    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.170903    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:13.665896    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:13.665914    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.665923    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.665930    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.668510    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:13.669059    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:13.669066    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.669071    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.669074    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.670842    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:14.165888    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:14.165910    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.165935    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.165941    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.168473    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:14.169111    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:14.169118    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.169124    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.169137    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.170994    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:14.667072    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:14.667128    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.667140    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.667151    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.670650    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:14.671210    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:14.671217    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.671222    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.671226    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.672859    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:14.673218    4110 pod_ready.go:103] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:15.165335    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:15.165362    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.165375    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.165382    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.169212    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:15.169615    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:15.169623    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.169629    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.169633    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.171395    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:15.665422    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:15.665483    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.665498    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.665505    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.667889    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:15.668348    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:15.668356    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.668364    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.668369    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.670115    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.166085    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:16.166134    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.166147    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.166156    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.168879    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:16.169423    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:16.169430    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.169439    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.169442    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.171016    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.666749    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:16.666767    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.666797    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.666802    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.669480    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:16.669826    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:16.669832    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.669838    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.669842    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.671504    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.671930    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.671939    4110 pod_ready.go:82] duration metric: took 4.006767511s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.671955    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.671990    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:12:16.671995    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.672000    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.672005    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.673862    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.674451    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:12:16.674459    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.674464    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.674468    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.676355    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.676667    4110 pod_ready.go:93] pod "kube-proxy-528ht" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.676675    4110 pod_ready.go:82] duration metric: took 4.715112ms for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.676682    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.676724    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:12:16.676729    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.676734    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.676738    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.678611    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.678986    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:16.678993    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.678999    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.679003    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.680713    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.681034    4110 pod_ready.go:93] pod "kube-proxy-g9wxm" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.681043    4110 pod_ready.go:82] duration metric: took 4.356651ms for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.681050    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.681091    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:12:16.681097    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.681102    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.681106    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.682940    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.683445    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:16.683452    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.683458    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.683462    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.685017    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.685461    4110 pod_ready.go:93] pod "kube-proxy-vskbj" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.685470    4110 pod_ready.go:82] duration metric: took 4.414596ms for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.685478    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.851971    4110 request.go:632] Waited for 166.418009ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:12:16.852035    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:12:16.852064    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.852076    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.852084    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.855683    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:17.050985    4110 request.go:632] Waited for 194.718198ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.051089    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.051098    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.051110    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.051119    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.054384    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:17.054876    4110 pod_ready.go:93] pod "kube-proxy-zrqvr" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:17.054889    4110 pod_ready.go:82] duration metric: took 369.398412ms for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
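[Editor's note] The request.go:632 "Waited ... due to client-side throttling" lines come from client-go's default token-bucket limiter (QPS=5, Burst=10), not from server-side API Priority and Fairness; the bursts of pod/node GETs above simply exceed the bucket. If the pacing mattered, the limiter is adjustable on the rest.Config, as in this sketch (values illustrative, kubeconfig path a placeholder):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	// client-go defaults: QPS=5, Burst=10. The request.go waits in the log
	// are this limiter pacing bursts of GETs; raising it trades apiserver
	// load for latency.
	cfg.QPS = 50
	cfg.Burst = 100
	fmt.Printf("rate limit: QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
}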
	I0917 02:12:17.054898    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.250755    4110 request.go:632] Waited for 195.811261ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:12:17.250805    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:12:17.250817    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.250830    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.250841    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.291380    4110 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0917 02:12:17.450914    4110 request.go:632] Waited for 157.443488ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:17.450956    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:17.450990    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.450996    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.450999    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.455828    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:17.456276    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:17.456286    4110 pod_ready.go:82] duration metric: took 401.376038ms for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.456294    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.651418    4110 request.go:632] Waited for 195.082221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:12:17.651455    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:12:17.651461    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.651471    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.651495    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.668422    4110 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0917 02:12:17.850764    4110 request.go:632] Waited for 181.996065ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.850819    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.850825    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.850832    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.850836    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.857947    4110 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 02:12:17.858420    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:17.858431    4110 pod_ready.go:82] duration metric: took 402.124989ms for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.858439    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:18.051442    4110 request.go:632] Waited for 192.93696ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:12:18.051483    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:12:18.051491    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.051499    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.051512    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.054127    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:18.250926    4110 request.go:632] Waited for 196.199352ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:18.250961    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:18.250966    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.251003    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.251008    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.274920    4110 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0917 02:12:18.275585    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:18.275595    4110 pod_ready.go:82] duration metric: took 417.143356ms for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:18.275606    4110 pod_ready.go:39] duration metric: took 17.328217726s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:12:18.275618    4110 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:12:18.275688    4110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:12:18.289040    4110 api_server.go:72] duration metric: took 17.521587147s to wait for apiserver process to appear ...
	I0917 02:12:18.289060    4110 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:12:18.289072    4110 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 02:12:18.292824    4110 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0917 02:12:18.292862    4110 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 02:12:18.292866    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.292872    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.292879    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.294137    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:18.294247    4110 api_server.go:141] control plane version: v1.31.1
	I0917 02:12:18.294257    4110 api_server.go:131] duration metric: took 5.192363ms to wait for apiserver health ...
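[Editor's note] The health gate above is two plain HTTPS requests against the control-plane endpoint: GET /healthz expecting the literal body "ok", then GET /version to read the control plane version. A net/http sketch of the same check follows; skipping TLS verification is a shortcut for illustration, a real client should trust the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: verify against the cluster CA in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // the log saw: 200 ok
}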
	I0917 02:12:18.294263    4110 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 02:12:18.451185    4110 request.go:632] Waited for 156.882548ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.451216    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.451222    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.451248    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.451254    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.490169    4110 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0917 02:12:18.505194    4110 system_pods.go:59] 26 kube-system pods found
	I0917 02:12:18.505219    4110 system_pods.go:61] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.505226    4110 system_pods.go:61] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.505231    4110 system_pods.go:61] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:12:18.505234    4110 system_pods.go:61] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running
	I0917 02:12:18.505237    4110 system_pods.go:61] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:12:18.505240    4110 system_pods.go:61] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:12:18.505244    4110 system_pods.go:61] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:12:18.505247    4110 system_pods.go:61] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:12:18.505250    4110 system_pods.go:61] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running
	I0917 02:12:18.505273    4110 system_pods.go:61] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:12:18.505282    4110 system_pods.go:61] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running
	I0917 02:12:18.505290    4110 system_pods.go:61] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:12:18.505313    4110 system_pods.go:61] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:12:18.505323    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running
	I0917 02:12:18.505338    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:12:18.505343    4110 system_pods.go:61] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:12:18.505351    4110 system_pods.go:61] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:12:18.505361    4110 system_pods.go:61] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:12:18.505367    4110 system_pods.go:61] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running
	I0917 02:12:18.505373    4110 system_pods.go:61] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:12:18.505378    4110 system_pods.go:61] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running
	I0917 02:12:18.505384    4110 system_pods.go:61] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:12:18.505388    4110 system_pods.go:61] "kube-vip-ha-857000" [84b805d8-9a8f-4c6f-b18f-76c98ca4776c] Running
	I0917 02:12:18.505392    4110 system_pods.go:61] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:12:18.505396    4110 system_pods.go:61] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:12:18.505399    4110 system_pods.go:61] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:12:18.505406    4110 system_pods.go:74] duration metric: took 211.134036ms to wait for pod list to return data ...
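[Editor's note] Note the two coredns entries report "Running / Ready:ContainersNotReady": pod phase and the Ready condition are independent, and a pod can be Running while a container's readiness probe still fails. A sketch that lists kube-system pods and reproduces that distinction (kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// readyStatus returns the pod's Ready condition status, or Unknown if unset.
func readyStatus(pod corev1.Pod) corev1.ConditionStatus {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status
		}
	}
	return corev1.ConditionUnknown
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Phase can be "Running" while Ready is still "False", which is
		// exactly the coredns state printed above.
		fmt.Printf("%s: phase=%s ready=%s\n", p.Name, p.Status.Phase, readyStatus(p))
	}
}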
	I0917 02:12:18.505413    4110 default_sa.go:34] waiting for default service account to be created ...
	I0917 02:12:18.650733    4110 request.go:632] Waited for 145.255733ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:12:18.650776    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:12:18.650782    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.650793    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.650798    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.659108    4110 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 02:12:18.659203    4110 default_sa.go:45] found service account: "default"
	I0917 02:12:18.659217    4110 default_sa.go:55] duration metric: took 153.795915ms for default service account to be created ...
	I0917 02:12:18.659227    4110 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 02:12:18.851528    4110 request.go:632] Waited for 192.225662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.851585    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.851591    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.851597    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.851600    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.855716    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:18.861599    4110 system_pods.go:86] 26 kube-system pods found
	I0917 02:12:18.861618    4110 system_pods.go:89] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.861630    4110 system_pods.go:89] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.861635    4110 system_pods.go:89] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:12:18.861638    4110 system_pods.go:89] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running
	I0917 02:12:18.861642    4110 system_pods.go:89] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:12:18.861645    4110 system_pods.go:89] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:12:18.861649    4110 system_pods.go:89] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:12:18.861653    4110 system_pods.go:89] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:12:18.861657    4110 system_pods.go:89] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running
	I0917 02:12:18.861660    4110 system_pods.go:89] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:12:18.861663    4110 system_pods.go:89] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running
	I0917 02:12:18.861666    4110 system_pods.go:89] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:12:18.861670    4110 system_pods.go:89] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:12:18.861673    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running
	I0917 02:12:18.861677    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:12:18.861682    4110 system_pods.go:89] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:12:18.861685    4110 system_pods.go:89] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:12:18.861690    4110 system_pods.go:89] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:12:18.861694    4110 system_pods.go:89] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running
	I0917 02:12:18.861698    4110 system_pods.go:89] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:12:18.861701    4110 system_pods.go:89] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running
	I0917 02:12:18.861704    4110 system_pods.go:89] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:12:18.861707    4110 system_pods.go:89] "kube-vip-ha-857000" [c577f2f1-ab99-4fbe-acc1-516a135f0377] Pending
	I0917 02:12:18.861710    4110 system_pods.go:89] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:12:18.861713    4110 system_pods.go:89] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:12:18.861715    4110 system_pods.go:89] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:12:18.861720    4110 system_pods.go:126] duration metric: took 202.461636ms to wait for k8s-apps to be running ...
	I0917 02:12:18.861726    4110 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:12:18.861778    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:12:18.882032    4110 system_svc.go:56] duration metric: took 20.298661ms WaitForService to wait for kubelet
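[Editor's note] The kubelet check is a single remote command, sudo systemctl is-active --quiet service kubelet, where only the exit status matters because --quiet suppresses output. A golang.org/x/crypto/ssh sketch of running it; host, user, and key path are placeholders, and ignoring the host key is for illustration only.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.169.0.8:22", &ssh.ClientConfig{
		User:            "docker", // placeholder user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// A nil error means exit status 0, i.e. the service is active.
	err = sess.Run("sudo systemctl is-active --quiet service kubelet")
	fmt.Println("kubelet active:", err == nil)
}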
	I0917 02:12:18.882059    4110 kubeadm.go:582] duration metric: took 18.114595178s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:12:18.882083    4110 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:12:19.052878    4110 request.go:632] Waited for 170.643294ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 02:12:19.052938    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 02:12:19.052951    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:19.052966    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:19.052976    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:19.057011    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:19.057806    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057817    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057824    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057827    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057830    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057834    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057837    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057840    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057843    4110 node_conditions.go:105] duration metric: took 175.740836ms to run NodePressure ...
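[Editor's note] The NodePressure step is one GET of /api/v1/nodes followed by reading each node's capacity, which is why the same ephemeral-storage/cpu pair prints four times, once per node of the ha-857000 cluster. A client-go sketch of that readout (kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// Matches the per-node "storage ephemeral capacity ... / cpu capacity"
		// pairs in the log (17734596Ki / 2 for each of the four nodes).
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
}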
	I0917 02:12:19.057851    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:12:19.057867    4110 start.go:255] writing updated cluster config ...
	I0917 02:12:19.079978    4110 out.go:201] 
	I0917 02:12:19.117280    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:19.117377    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:12:19.138898    4110 out.go:177] * Starting "ha-857000-m04" worker node in "ha-857000" cluster
	I0917 02:12:19.180945    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:12:19.180969    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:12:19.181086    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:12:19.181097    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:12:19.181167    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:12:19.181757    4110 start.go:360] acquireMachinesLock for ha-857000-m04: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:12:19.181807    4110 start.go:364] duration metric: took 37.353µs to acquireMachinesLock for "ha-857000-m04"
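[Editor's note] acquireMachinesLock serializes machine operations under a named lock; the Delay:500ms and Timeout:13m0s fields in the log indicate retry-until-timeout semantics. The sketch below shows that pattern with a plain exclusive lock file; it is a generic illustration under those assumptions, not minikube's lock package.

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire retries creating an exclusive lock file every delay until timeout,
// mirroring the Delay/Timeout fields shown in the log.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/ha-857000-m04.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held")
}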
	I0917 02:12:19.181825    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:12:19.181830    4110 fix.go:54] fixHost starting: m04
	I0917 02:12:19.182086    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:19.182106    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:19.191065    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52292
	I0917 02:12:19.191452    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:19.191850    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:19.191867    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:19.192069    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:19.192186    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:19.192279    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetState
	I0917 02:12:19.192404    4110 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:19.192500    4110 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid from json: 3550
	I0917 02:12:19.193450    4110 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid 3550 missing from process table
	I0917 02:12:19.193488    4110 fix.go:112] recreateIfNeeded on ha-857000-m04: state=Stopped err=<nil>
	I0917 02:12:19.193498    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	W0917 02:12:19.193587    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:12:19.214824    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m04" ...
	I0917 02:12:19.289023    4110 main.go:141] libmachine: (ha-857000-m04) Calling .Start
	I0917 02:12:19.289295    4110 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:19.289356    4110 main.go:141] libmachine: (ha-857000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/hyperkit.pid
	I0917 02:12:19.289453    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Using UUID 32bc812d-06ce-423b-90a4-5417ea5ec912
	I0917 02:12:19.319068    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Generated MAC a:b6:8:34:25:a6
	I0917 02:12:19.319111    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:12:19.319291    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"32bc812d-06ce-423b-90a4-5417ea5ec912", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000289980)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:12:19.319339    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"32bc812d-06ce-423b-90a4-5417ea5ec912", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000289980)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:12:19.319395    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "32bc812d-06ce-423b-90a4-5417ea5ec912", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/ha-857000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:12:19.319498    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 32bc812d-06ce-423b-90a4-5417ea5ec912 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/ha-857000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:12:19.319538    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:12:19.321260    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Pid is 4161
	I0917 02:12:19.321886    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Attempt 0
	I0917 02:12:19.321908    4110 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:19.321989    4110 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid from json: 4161
	I0917 02:12:19.324366    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Searching for a:b6:8:34:25:a6 in /var/db/dhcpd_leases ...
	I0917 02:12:19.324461    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:12:19.324494    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:12:19.324519    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:12:19.324537    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:12:19.324552    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:12:19.324565    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Found match: a:b6:8:34:25:a6
	I0917 02:12:19.324580    4110 main.go:141] libmachine: (ha-857000-m04) DBG | IP: 192.169.0.8
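[Editor's note] With no persistent lease on the VM, the driver recovers the IP by scanning macOS's /var/db/dhcpd_leases for the MAC it generated (a:b6:8:34:25:a6 above resolved to 192.169.0.8). A stdlib sketch of that lookup, with the lease-file parsing simplified to the ip_address/hw_address lines:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans the macOS DHCP lease file for an hw_address entry ending in
// mac and returns the ip_address seen earlier in the same lease block.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		// hw_address lines look like "hw_address=1,a:b6:8:34:25:a6".
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
			return ip, nil
		}
	}
	return "", fmt.Errorf("%s not found in %s", mac, path)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "a:b6:8:34:25:a6")
	if err != nil {
		panic(err)
	}
	fmt.Println("IP:", ip) // the log resolved this MAC to 192.169.0.8
}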
	I0917 02:12:19.324586    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetConfigRaw
	I0917 02:12:19.325317    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:19.325565    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:12:19.326089    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:12:19.326109    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:19.326263    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:19.326401    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:19.326560    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:19.326727    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:19.326852    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:19.327048    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:19.327215    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:19.327223    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:12:19.329900    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:12:19.339917    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:12:19.340861    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:12:19.340880    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:12:19.340887    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:12:19.340906    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:12:19.732737    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:12:19.732752    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:12:19.847625    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:12:19.847643    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:12:19.847688    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:12:19.847715    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:12:19.848483    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:12:19.848501    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:12:25.591852    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:12:25.591915    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:12:25.591925    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:12:25.615174    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:12:29.572071    4110 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.8:22: connect: connection refused
	I0917 02:12:32.627647    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:12:32.627664    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetMachineName
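[Editor's note] Between hyperkit starting the VM and sshd coming up there is a window where port 22 refuses connections (the 02:12:29 dial error above); provisioning simply retries the dial until it succeeds, then runs hostname. A sketch of that wait (address and timeout are illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH retries a TCP dial to addr until it succeeds or the timeout
// elapses, which is all the "connection refused" line above amounts to.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh not reachable at %s: %w", addr, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForSSH("192.169.0.8:22", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("sshd is up")
}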
	I0917 02:12:32.627799    4110 buildroot.go:166] provisioning hostname "ha-857000-m04"
	I0917 02:12:32.627808    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetMachineName
	I0917 02:12:32.627920    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.628014    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.628110    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.628210    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.628294    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.628431    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:32.628580    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:32.628587    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m04 && echo "ha-857000-m04" | sudo tee /etc/hostname
	I0917 02:12:32.692963    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m04
	
	I0917 02:12:32.692980    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.693102    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.693193    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.693281    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.693375    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.693517    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:32.693670    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:32.693680    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:12:32.753597    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
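The two SSH commands above set the guest hostname and keep /etc/hosts consistent with it. A small Go sketch that renders the same command strings for an arbitrary node name (illustrative only, not the provisioner's source):

    package main

    import "fmt"

    func main() {
        name := "ha-857000-m04"
        // Set the hostname and persist it across reboots.
        fmt.Printf("sudo hostname %s && echo %q | sudo tee /etc/hostname\n\n", name, name)
        // Keep /etc/hosts resolving the new name, rewriting 127.0.1.1 if present.
        fmt.Printf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi
    `, name)
    }
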
	I0917 02:12:32.753613    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:12:32.753629    4110 buildroot.go:174] setting up certificates
	I0917 02:12:32.753635    4110 provision.go:84] configureAuth start
	I0917 02:12:32.753642    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetMachineName
	I0917 02:12:32.753783    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:32.753886    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.753973    4110 provision.go:143] copyHostCerts
	I0917 02:12:32.754002    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:12:32.754055    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:12:32.754061    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:12:32.754199    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:12:32.754425    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:12:32.754455    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:12:32.754465    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:12:32.754535    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:12:32.754684    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:12:32.754713    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:12:32.754717    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:12:32.754781    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:12:32.754925    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m04 san=[127.0.0.1 192.169.0.8 ha-857000-m04 localhost minikube]
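The provision step above generates a server certificate whose SANs cover the node IP, hostname, and the usual localhost aliases. A self-contained Go sketch of issuing such a certificate with crypto/x509, using the SAN list from the log line above (self-signed here for brevity, whereas the real flow signs with the minikube CA key):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000-m04"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirroring the san=[...] list in the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
            DNSNames:    []string{"ha-857000-m04", "localhost", "minikube"},
        }
        // Self-signed for brevity; the real flow signs with the CA's key.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
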
	I0917 02:12:32.886815    4110 provision.go:177] copyRemoteCerts
	I0917 02:12:32.886883    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:12:32.886900    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.887049    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.887156    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.887265    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.887345    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:32.921412    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:12:32.921483    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:12:32.942093    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:12:32.942165    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:12:32.962202    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:12:32.962278    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:12:32.982539    4110 provision.go:87] duration metric: took 228.892121ms to configureAuth
	I0917 02:12:32.982555    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:12:32.982734    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:32.982747    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:32.982882    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.982965    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.983053    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.983146    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.983222    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.983341    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:32.983471    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:32.983479    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:12:33.039112    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:12:33.039126    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:12:33.039209    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:12:33.039225    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:33.039356    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:33.039463    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.039553    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.039642    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:33.039765    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:33.039901    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:33.039948    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:12:33.105290    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:12:33.105311    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:33.105463    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:33.105557    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.105679    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.105803    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:33.106006    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:33.106166    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:33.106179    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:12:34.690044    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:12:34.690061    4110 machine.go:96] duration metric: took 15.363692529s to provisionDockerMachine
	I0917 02:12:34.690069    4110 start.go:293] postStartSetup for "ha-857000-m04" (driver="hyperkit")
	I0917 02:12:34.690105    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:12:34.690128    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.690331    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:12:34.690344    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.690448    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.690550    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.690643    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.690734    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:34.729693    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:12:34.733386    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:12:34.733399    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:12:34.733491    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:12:34.733629    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:12:34.733635    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:12:34.733801    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:12:34.743555    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:12:34.777005    4110 start.go:296] duration metric: took 86.908647ms for postStartSetup
	I0917 02:12:34.777029    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.777213    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:12:34.777227    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.777324    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.777401    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.777484    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.777560    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:34.811015    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:12:34.811085    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:12:34.865249    4110 fix.go:56] duration metric: took 15.683145042s for fixHost
	I0917 02:12:34.865277    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.865435    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.865528    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.865626    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.865720    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.865866    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:34.866008    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:34.866017    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:12:34.922683    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564355.020144093
	
	I0917 02:12:34.922697    4110 fix.go:216] guest clock: 1726564355.020144093
	I0917 02:12:34.922703    4110 fix.go:229] Guest: 2024-09-17 02:12:35.020144093 -0700 PDT Remote: 2024-09-17 02:12:34.865267 -0700 PDT m=+127.793621612 (delta=154.877093ms)
	I0917 02:12:34.922714    4110 fix.go:200] guest clock delta is within tolerance: 154.877093ms
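The guest-clock check above runs `date +%s.%N` in the VM and compares the result with the host clock. A Go sketch of that comparison, fed the sample value from the log (the 2s tolerance is an assumption for illustration, not minikube's configured value):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1726564355.020144093" // sample `date +%s.%N` output from the log
        parts := strings.SplitN(guestOut, ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        // The 2s tolerance here is illustrative only.
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < 2*time.Second)
    }
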
	I0917 02:12:34.922718    4110 start.go:83] releasing machines lock for "ha-857000-m04", held for 15.740632652s
	I0917 02:12:34.922744    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.922875    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:34.945234    4110 out.go:177] * Found network options:
	I0917 02:12:34.965134    4110 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0917 02:12:34.986412    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:12:34.986446    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:12:34.986459    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:12:34.986477    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.987363    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.987619    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.987838    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 02:12:34.987863    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:12:34.987882    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	W0917 02:12:34.987901    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:12:34.987917    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:12:34.988015    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:12:34.988040    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.988144    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.988241    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.988362    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.988430    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.988562    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.988636    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.988712    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:34.988798    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	W0917 02:12:35.089466    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:12:35.089538    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:12:35.103798    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:12:35.103814    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:12:35.103888    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:12:35.122855    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:12:35.131456    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:12:35.140120    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:12:35.140187    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:12:35.148614    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:12:35.156897    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:12:35.165192    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:12:35.173754    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:12:35.182471    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:12:35.191008    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:12:35.199448    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:12:35.207926    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:12:35.216411    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:12:35.228568    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:35.327014    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:12:35.346549    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:12:35.346628    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:12:35.370011    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:12:35.382502    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:12:35.397499    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:12:35.408840    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:12:35.420206    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:12:35.442422    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:12:35.453508    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:12:35.468375    4110 ssh_runner.go:195] Run: which cri-dockerd
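The two crictl.yaml writes above simply point crictl at whichever CRI socket is active: containerd first during cgroup-driver detection, then cri-dockerd once Docker is selected. A trivial Go sketch rendering those one-line configs (the helper name is illustrative):

    package main

    import "fmt"

    func crictlYAML(endpoint string) string {
        return fmt.Sprintf("runtime-endpoint: %s\n", endpoint)
    }

    func main() {
        fmt.Print(crictlYAML("unix:///run/containerd/containerd.sock")) // containerd phase
        fmt.Print(crictlYAML("unix:///var/run/cri-dockerd.sock"))       // cri-dockerd phase
    }
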
	I0917 02:12:35.471279    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:12:35.479407    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:12:35.492955    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:12:35.593589    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:12:35.695477    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:12:35.695504    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:12:35.710594    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:35.826600    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:12:38.101010    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.274345081s)
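The 130-byte daemon.json pushed just before the restart above configures Docker for the cgroupfs cgroup driver. The log does not show the file's contents, so the shape below is an assumption for illustration only:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed shape only: the log reports 130 bytes but not the contents.
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b))
    }
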
	I0917 02:12:38.101138    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:12:38.113882    4110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:12:38.128373    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:12:38.140107    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:12:38.249684    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:12:38.361672    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:38.469978    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:12:38.489760    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:12:38.502395    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:38.604591    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:12:38.669590    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:12:38.669684    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:12:38.674420    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:12:38.674483    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:12:38.677707    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:12:38.702126    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:12:38.702225    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:12:38.719390    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:12:38.757457    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:12:38.799117    4110 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 02:12:38.819990    4110 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0917 02:12:38.841085    4110 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	I0917 02:12:38.862007    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:38.862240    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:12:38.865326    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:12:38.874823    4110 mustload.go:65] Loading cluster: ha-857000
	I0917 02:12:38.875009    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:38.875239    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:38.875265    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:38.884252    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52315
	I0917 02:12:38.884596    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:38.885007    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:38.885024    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:38.885217    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:38.885327    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:12:38.885411    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:38.885502    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:12:38.886472    4110 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:12:38.886740    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:38.886764    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:38.895399    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52317
	I0917 02:12:38.895752    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:38.896084    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:38.896095    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:38.896312    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:38.896445    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:12:38.896532    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.8
	I0917 02:12:38.896538    4110 certs.go:194] generating shared ca certs ...
	I0917 02:12:38.896550    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:12:38.896701    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:12:38.896754    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:12:38.896764    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:12:38.896789    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:12:38.896809    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:12:38.896826    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:12:38.896910    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:12:38.896963    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:12:38.896974    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:12:38.897008    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:12:38.897042    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:12:38.897070    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:12:38.897139    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:12:38.897176    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:12:38.897196    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:12:38.897214    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:38.897242    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:12:38.917488    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:12:38.937120    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:12:38.956856    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:12:38.976762    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:12:38.997198    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:12:39.018037    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:12:39.040033    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:12:39.044757    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:12:39.053844    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:12:39.057290    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:12:39.057337    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:12:39.061592    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:12:39.070092    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:12:39.078554    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:39.082016    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:39.082086    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:39.086282    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:12:39.094779    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:12:39.103890    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:12:39.107498    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:12:39.107551    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:12:39.111799    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:12:39.120941    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:12:39.124549    4110 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 02:12:39.124586    4110 kubeadm.go:934] updating node {m04 192.169.0.8 0 v1.31.1 docker false true} ...
	I0917 02:12:39.124645    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
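The kubelet unit above is rendered per node: the Kubernetes version selects the binary path, and --hostname-override/--node-ip pin the node's identity. A Go sketch of rendering that ExecStart line with text/template (the template and field names are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    const kubeletTmpl = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
        t.Execute(os.Stdout, map[string]string{
            "KubernetesVersion": "v1.31.1",
            "NodeName":          "ha-857000-m04",
            "NodeIP":            "192.169.0.8",
        })
    }
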
	I0917 02:12:39.124713    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:12:39.132685    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:12:39.132752    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0917 02:12:39.140189    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 02:12:39.153737    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:12:39.167480    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:12:39.170335    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:12:39.180131    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:39.274978    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:39.290344    4110 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0917 02:12:39.290539    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:39.312606    4110 out.go:177] * Verifying Kubernetes components...
	I0917 02:12:39.332523    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:39.447567    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:39.466307    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:12:39.466524    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 02:12:39.466571    4110 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 02:12:39.467449    4110 node_ready.go:35] waiting up to 6m0s for node "ha-857000-m04" to be "Ready" ...
	I0917 02:12:39.467568    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:12:39.467575    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.467585    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.467591    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.470632    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
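From here the log polls GET /api/v1/nodes/ha-857000-m04 roughly every 500ms until the node reports Ready. A client-go sketch of the same wait loop (the kubeconfig path and timeouts are illustrative; the 6m0s budget matches the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-857000-m04", metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not ready yet"
            }
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil
        })
        fmt.Println("node Ready:", err == nil)
    }
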
	I0917 02:12:39.969561    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:12:39.969576    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.969585    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.969590    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.972203    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:39.972562    4110 node_ready.go:49] node "ha-857000-m04" has status "Ready":"True"
	I0917 02:12:39.972573    4110 node_ready.go:38] duration metric: took 505.091961ms for node "ha-857000-m04" to be "Ready" ...
	I0917 02:12:39.972579    4110 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:12:39.972614    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:39.972619    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.972625    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.972629    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.976988    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:39.982728    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:39.982773    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:39.982778    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.982795    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.982801    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.985018    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:39.985518    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:39.985526    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.985532    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.985536    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.987300    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:40.482877    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:40.482889    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.482894    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.482898    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.485392    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:40.485952    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:40.485960    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.485965    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.485972    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.487726    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:40.984290    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:40.984330    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.984337    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.984340    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.986636    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:40.987126    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:40.987134    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.987140    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.987144    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.989077    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:41.483798    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:41.483813    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.483838    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.483842    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.485913    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:41.486349    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:41.486357    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.486363    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.486366    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.487997    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:41.984399    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:41.984423    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.984436    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.984441    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.987692    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:41.988563    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:41.988571    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.988576    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.988580    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.990387    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:41.990837    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:42.483597    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:42.483651    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.483720    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.483731    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.486451    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:42.487002    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:42.487009    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.487015    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.487019    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.488735    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:42.984178    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:42.984202    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.984244    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.984250    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.987573    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:42.988040    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:42.988049    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.988056    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.988060    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.989664    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:43.484870    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:43.484884    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.484891    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.484894    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.487141    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:43.487687    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:43.487695    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.487701    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.487705    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.489384    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:43.985004    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:43.985028    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.985040    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.985047    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.988376    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:43.989251    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:43.989258    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.989264    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.989274    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.991010    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:43.991366    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:44.483323    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:44.483341    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.483350    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.483355    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.486151    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:44.486714    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:44.486722    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.486727    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.486732    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.488452    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:44.984530    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:44.984557    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.984569    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.984574    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.987518    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:44.988156    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:44.988163    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.988169    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.988173    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.989906    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:45.484413    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:45.484429    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.484436    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.484438    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.486664    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:45.487158    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:45.487166    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.487172    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.487180    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.488811    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:45.983568    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:45.983588    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.983597    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.983601    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.986094    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:45.986663    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:45.986670    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.986676    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.986681    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.988390    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:46.484237    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:46.484252    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.484258    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.484262    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.486548    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:46.487112    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:46.487120    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.487126    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.487130    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.488764    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:46.489074    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:46.984666    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:46.984685    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.984693    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.984699    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.987277    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:46.987747    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:46.987754    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.987760    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.987764    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.989871    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:47.483189    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:47.483204    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.483220    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.483225    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.485536    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:47.486040    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:47.486048    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.486053    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.486077    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.487968    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:47.983218    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:47.983261    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.983271    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.983276    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.985959    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:47.986467    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:47.986476    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.986480    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.986483    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.988256    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:48.483839    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:48.483855    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.483877    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.483881    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.486127    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:48.486742    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:48.486750    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.486756    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.486763    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.488482    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:48.983104    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:48.983116    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.983123    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.983126    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.986541    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:48.986974    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:48.986982    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.986988    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.987000    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.989572    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:48.989840    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:49.483113    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:49.483127    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.483135    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.483138    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.485418    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:49.485944    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:49.485952    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.485958    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.485965    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.488051    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:49.983392    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:49.983418    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.983430    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.983435    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.990100    4110 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 02:12:49.990521    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:49.990528    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.990534    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.990551    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.995841    4110 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:12:50.484489    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:50.484507    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.484516    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.484519    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.487282    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:50.487803    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:50.487815    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.487821    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.487826    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.489538    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:50.984752    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:50.984776    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.984788    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.984796    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.988059    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:50.988580    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:50.988587    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.988593    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.988597    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.990162    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:50.990537    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:51.483827    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:51.483847    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.483864    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.483902    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.487323    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:51.487924    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:51.487932    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.487937    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.487942    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.489844    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:51.983451    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:51.983470    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.983482    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.983488    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.986994    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:51.987525    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:51.987535    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.987543    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.987548    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.989115    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:52.483263    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:52.483288    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.483325    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.483332    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.486347    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:52.486988    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:52.486995    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.487001    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.487005    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.488688    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:52.983765    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:52.983790    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.983801    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.983810    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.986675    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:52.987089    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:52.987119    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.987125    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.987129    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.988627    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:53.484927    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:53.484941    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.484948    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.484951    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.487216    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:53.487660    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:53.487667    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.487673    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.487676    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.489219    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:53.489560    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:53.984242    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:53.984261    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.984274    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.984280    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.986802    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:53.987318    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:53.987326    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.987333    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.987336    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.989152    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:54.483277    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:54.483309    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.483353    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.483368    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.486304    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:54.486703    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:54.486709    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.486715    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.486718    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.488409    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:54.984401    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:54.984421    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.984432    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.984436    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.987150    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:54.987731    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:54.987739    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.987745    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.987762    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.990093    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:55.484219    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:55.484245    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.484263    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.484270    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.487478    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:55.488038    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:55.488046    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.488052    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.488055    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.489736    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:55.490063    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:55.983721    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:55.983738    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.983747    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.983751    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.986467    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:55.986910    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:55.986918    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.986924    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.986927    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.988668    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:56.483680    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:56.483698    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.483705    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.483708    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.486006    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:56.486509    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:56.486517    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.486523    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.486526    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.488267    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:56.984953    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:56.984979    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.984991    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.984998    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.988958    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:56.989556    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:56.989567    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.989575    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.989580    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.991555    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.483204    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:57.483220    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.483244    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.483257    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.489651    4110 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 02:12:57.491669    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.491685    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.491693    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.491697    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.500745    4110 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0917 02:12:57.502366    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.502386    4110 pod_ready.go:82] duration metric: took 17.519343583s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
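
The block above is minikube's pod_ready poll loop: roughly every 500ms it GETs the pod, then GETs the node it runs on, and re-checks the pod's Ready condition until it reports "True". A minimal client-go sketch of that pattern (assumed names; an illustration only, not minikube's pod_ready implementation):

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is "True" or the
// timeout expires, mirroring the GET loop in the log above.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, interval, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, interval, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not posted yet; keep polling
		})
}
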
	I0917 02:12:57.502398    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.502483    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nl5j5
	I0917 02:12:57.502497    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.502507    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.502512    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.512509    4110 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0917 02:12:57.513793    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.513807    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.513817    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.513823    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.522244    4110 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 02:12:57.522585    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.522595    4110 pod_ready.go:82] duration metric: took 20.190892ms for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.522609    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.522650    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000
	I0917 02:12:57.522656    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.522662    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.522666    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.527526    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:57.528075    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.528084    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.528089    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.528100    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.530647    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:57.531009    4110 pod_ready.go:93] pod "etcd-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.531019    4110 pod_ready.go:82] duration metric: took 8.403704ms for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.531025    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.531068    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m02
	I0917 02:12:57.531073    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.531082    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.531087    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.533324    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:57.533687    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:57.533694    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.533700    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.533704    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.535601    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.535875    4110 pod_ready.go:93] pod "etcd-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.535883    4110 pod_ready.go:82] duration metric: took 4.853562ms for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.535902    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.535944    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:12:57.535950    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.535956    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.535960    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.537587    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.537964    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:57.537972    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.537978    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.537982    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.539462    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.539797    4110 pod_ready.go:93] pod "etcd-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.539805    4110 pod_ready.go:82] duration metric: took 3.894392ms for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.539816    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.684040    4110 request.go:632] Waited for 144.185674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:57.684081    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:57.684104    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.684125    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.684132    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.686547    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:57.883303    4110 request.go:632] Waited for 196.17665ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.883378    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.883388    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.883398    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.883406    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.886942    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:57.887555    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.887569    4110 pod_ready.go:82] duration metric: took 347.737487ms for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
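
The "Waited for ... due to client-side throttling, not priority and fairness" lines above are produced by client-go's own token-bucket rate limiter, which delays requests once the configured QPS is exceeded. A sketch of where that limiter is configured (illustrative QPS/Burst values, not minikube's settings):

package podwait

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newThrottledClient builds a clientset whose requests are rate limited on
// the client side; requests beyond the sustained QPS are delayed, which is
// what emits the "Waited for ..." messages in the log.
func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // sustained requests per second (illustrative)
	cfg.Burst = 10 // short-term burst allowance (illustrative)
	return kubernetes.NewForConfig(cfg)
}
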
	I0917 02:12:57.887576    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.083903    4110 request.go:632] Waited for 196.258589ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:58.084076    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:58.084095    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.084104    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.084111    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.087323    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:58.284752    4110 request.go:632] Waited for 196.829301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:58.284841    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:58.284851    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.284863    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.284871    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.287836    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:58.288234    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:58.288243    4110 pod_ready.go:82] duration metric: took 400.655079ms for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.288251    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.484581    4110 request.go:632] Waited for 196.285151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:58.484627    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:58.484634    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.484670    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.484676    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.487401    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:58.683590    4110 request.go:632] Waited for 195.669934ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:58.683635    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:58.683643    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.683695    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.683709    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.687024    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:58.687397    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:58.687407    4110 pod_ready.go:82] duration metric: took 399.144074ms for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.687414    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.884795    4110 request.go:632] Waited for 197.34012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:12:58.884845    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:12:58.884854    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.884862    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.884886    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.887327    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:59.083807    4110 request.go:632] Waited for 195.949253ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:59.083945    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:59.083961    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.083973    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.083980    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.087431    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:59.087851    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:59.087864    4110 pod_ready.go:82] duration metric: took 400.438219ms for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.087874    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.283487    4110 request.go:632] Waited for 195.551174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:12:59.283570    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:12:59.283587    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.283598    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.283604    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.286668    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:59.483240    4110 request.go:632] Waited for 196.050684ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:59.483272    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:59.483277    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.483284    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.483287    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.485481    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:59.485790    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:59.485799    4110 pod_ready.go:82] duration metric: took 397.912163ms for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.485808    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.684196    4110 request.go:632] Waited for 198.346846ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:59.684283    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:59.684289    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.684295    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.684299    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.686349    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:59.883921    4110 request.go:632] Waited for 197.130794ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:59.883972    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:59.883980    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.884030    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.884039    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.888316    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:59.888770    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:59.888788    4110 pod_ready.go:82] duration metric: took 402.964156ms for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.888815    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.083631    4110 request.go:632] Waited for 194.730555ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:13:00.083713    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:13:00.083720    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.083728    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.083732    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.086353    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:00.285261    4110 request.go:632] Waited for 198.400376ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:13:00.285348    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:13:00.285356    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.285364    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.285370    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.287853    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:00.288149    4110 pod_ready.go:93] pod "kube-proxy-528ht" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:00.288159    4110 pod_ready.go:82] duration metric: took 399.322905ms for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.288167    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.484621    4110 request.go:632] Waited for 196.39101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:13:00.484716    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:13:00.484727    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.484737    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.484744    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.488045    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:00.685321    4110 request.go:632] Waited for 196.686181ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:00.685381    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:00.685438    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.685455    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.685464    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.688919    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:00.689362    4110 pod_ready.go:93] pod "kube-proxy-g9wxm" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:00.689374    4110 pod_ready.go:82] duration metric: took 401.194339ms for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.689383    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.884950    4110 request.go:632] Waited for 195.521785ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:13:00.884994    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:13:00.885018    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.885025    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.885034    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.887231    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:01.084761    4110 request.go:632] Waited for 197.012037ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.084795    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.084800    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.084806    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.084810    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.088892    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:13:01.089243    4110 pod_ready.go:93] pod "kube-proxy-vskbj" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:01.089253    4110 pod_ready.go:82] duration metric: took 399.857039ms for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.089261    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.284602    4110 request.go:632] Waited for 195.290385ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:13:01.284640    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:13:01.284645    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.284672    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.284680    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.286636    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:13:01.483312    4110 request.go:632] Waited for 196.269648ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:01.483391    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:01.483403    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.483413    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.483434    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.486551    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:01.486934    4110 pod_ready.go:93] pod "kube-proxy-zrqvr" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:01.486943    4110 pod_ready.go:82] duration metric: took 397.670619ms for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.486950    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.683659    4110 request.go:632] Waited for 196.646108ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:13:01.683796    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:13:01.683807    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.683819    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.683825    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.686996    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:01.884224    4110 request.go:632] Waited for 196.55945ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.884363    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.884374    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.884385    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.884393    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.888135    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:01.888538    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:01.888551    4110 pod_ready.go:82] duration metric: took 401.588084ms for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.888559    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.083387    4110 request.go:632] Waited for 194.732026ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:13:02.083482    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:13:02.083493    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.083503    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.083512    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.087127    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:02.284704    4110 request.go:632] Waited for 197.205174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:02.284756    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:02.284761    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.284768    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.284773    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.287752    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:02.288038    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:02.288049    4110 pod_ready.go:82] duration metric: took 399.476957ms for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.288056    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.485154    4110 request.go:632] Waited for 197.02881ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:13:02.485191    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:13:02.485198    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.485206    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.485211    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.487672    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:02.685336    4110 request.go:632] Waited for 197.331043ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:02.685388    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:02.685397    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.685411    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.685417    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.688565    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:02.688910    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:02.688918    4110 pod_ready.go:82] duration metric: took 400.85077ms for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.688929    4110 pod_ready.go:39] duration metric: took 22.715951136s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:13:02.688942    4110 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:13:02.689000    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:13:02.699631    4110 system_svc.go:56] duration metric: took 10.684367ms for WaitForService to wait for kubelet
	I0917 02:13:02.699646    4110 kubeadm.go:582] duration metric: took 23.408872965s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:13:02.699663    4110 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:13:02.884773    4110 request.go:632] Waited for 185.024169ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 02:13:02.884858    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 02:13:02.884867    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.884878    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.884887    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.888704    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:02.889505    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889516    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889528    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889534    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889537    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889540    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889543    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889545    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889549    4110 node_conditions.go:105] duration metric: took 189.878189ms to run NodePressure ...
	I0917 02:13:02.889557    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:13:02.889572    4110 start.go:255] writing updated cluster config ...
	I0917 02:13:02.889954    4110 ssh_runner.go:195] Run: rm -f paused
	I0917 02:13:02.930446    4110 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0917 02:13:02.983109    4110 out.go:201] 
	W0917 02:13:03.020673    4110 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0917 02:13:03.057789    4110 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0917 02:13:03.135680    4110 out.go:177] * Done! kubectl is now configured to use "ha-857000" cluster and "default" namespace by default
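The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter (QPS 5, burst 10), not from server-side API Priority and Fairness; each ~200ms gap is the limiter pacing the poll loop's GETs. A minimal sketch of raising those limits on a rest.Config, assuming a standard kubeconfig (the package, helper name, and values are illustrative, not minikube's actual code):

	package clientutil

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// NewFastClient builds a clientset whose client-side rate limiter
	// allows more than the default 5 req/s that produced the waits above.
	func NewFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // client-go default: 5
		cfg.Burst = 100 // client-go default: 10
		return kubernetes.NewForConfig(cfg)
	}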
	
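The node_conditions.go block above prints one ephemeral-storage line and one cpu line per node, hence the four repeated pairs for the four nodes present at that point. A sketch of the equivalent client-go listing, assuming an initialized clientset (package and function names are illustrative):

	package nodeinfo

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// ListNodeCapacity mirrors the NodePressure check logged above:
	// for each node, report ephemeral-storage and cpu capacity.
	func ListNodeCapacity(ctx context.Context, client *kubernetes.Clientset) error {
		nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n",
				n.Name, storage.String(), cpu.String())
		}
		return nil
	}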
	
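start.go:600 and the warning after it compare the local kubectl minor version with the cluster's: kubectl is supported within one minor version of the API server, and 1.29 against 1.31 is a skew of two, hence the hint to use the version-matched 'minikube kubectl'. A simplified sketch of that skew check, assuming the minor versions are already parsed out (not minikube's actual code):

	package skew

	import "fmt"

	// WarnOnSkew reports when client and server minor versions differ
	// by more than the supported +/-1, as with 1.29.2 vs 1.31.1 above.
	func WarnOnSkew(clientMinor, clusterMinor int) {
		d := clusterMinor - clientMinor
		if d < 0 {
			d = -d
		}
		if d > 1 {
			fmt.Printf("minor skew: %d\n", d) // logged above as "(minor skew: 2)"
		}
	}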
	==> Docker <==
	Sep 17 09:12:18 ha-857000 cri-dockerd[1413]: time="2024-09-17T09:12:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc1d198ffe0b277108e5042e64604ceb887f7f17cd5814893fbf789f1ce180f0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316039322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316201907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316216597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316284213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356401685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356591613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356646706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356901392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358210462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358271414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358284287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358347315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361819988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361879924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361892293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361954784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:48 ha-857000 dockerd[1160]: time="2024-09-17T09:12:48.289404793Z" level=info msg="ignoring event" container=67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:12:48 ha-857000 dockerd[1166]: time="2024-09-17T09:12:48.290629069Z" level=info msg="shim disconnected" id=67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7 namespace=moby
	Sep 17 09:12:48 ha-857000 dockerd[1166]: time="2024-09-17T09:12:48.290966877Z" level=warning msg="cleaning up after shim disconnected" id=67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7 namespace=moby
	Sep 17 09:12:48 ha-857000 dockerd[1166]: time="2024-09-17T09:12:48.291008241Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269678049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269745426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269758363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269841312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d940d576a500a       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   6fb8068a5c29f       storage-provisioner
	119f2deb32f13       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   fc1d198ffe0b2       busybox-7dff88458-4jzg8
	b7aa83ae3a822       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   f4e7a7b3c65e5       coredns-7c65d6cfc9-nl5j5
	c37a677e31180       60c005f310ff3                                                                                         2 minutes ago        Running             kube-proxy                1                   5294422217d99       kube-proxy-vskbj
	3d889c7c8da7e       12968670680f4                                                                                         2 minutes ago        Running             kindnet-cni               1                   80326e6e99372       kindnet-7pf7v
	7b8b62bf7340c       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   f4cf87ea66207       coredns-7c65d6cfc9-fg65r
	67814a4514b10       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   6fb8068a5c29f       storage-provisioner
	ca7fe8ccd4c53       175ffd71cce3d                                                                                         2 minutes ago        Running             kube-controller-manager   6                   77f536a07a3a6       kube-controller-manager-ha-857000
	475dedee37228       6bab7719df100                                                                                         3 minutes ago        Running             kube-apiserver            6                   0968090389d54       kube-apiserver-ha-857000
	37d6d6479e30b       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  1                   2842ed202c474       kube-vip-ha-857000
	00ff29c213716       9aa1fad941575                                                                                         3 minutes ago        Running             kube-scheduler            2                   309841a63d772       kube-scheduler-ha-857000
	13b7f8a93ad49       175ffd71cce3d                                                                                         3 minutes ago        Exited              kube-controller-manager   5                   77f536a07a3a6       kube-controller-manager-ha-857000
	8c0804e78de8f       2e96e5913fc06                                                                                         3 minutes ago        Running             etcd                      2                   6cfb11ed1d6ba       etcd-ha-857000
	a18a6b023cd60       6bab7719df100                                                                                         3 minutes ago        Exited              kube-apiserver            5                   0968090389d54       kube-apiserver-ha-857000
	034279696db8f       38af8ddebf499                                                                                         8 minutes ago        Exited              kube-vip                  0                   4205e70bfa1bb       kube-vip-ha-857000
	d9fae1497b048       9aa1fad941575                                                                                         8 minutes ago        Exited              kube-scheduler            1                   37d9fe68f2e59       kube-scheduler-ha-857000
	f4f59b8c76404       2e96e5913fc06                                                                                         8 minutes ago        Exited              etcd                      1                   a23094a650513       etcd-ha-857000
	fe908ac73b00f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   10 minutes ago       Exited              busybox                   0                   80864159ef38e       busybox-7dff88458-4jzg8
	521527f17691c       c69fa2e9cbf5f                                                                                         13 minutes ago       Exited              coredns                   0                   aa21641a5b16e       coredns-7c65d6cfc9-nl5j5
	f991c8e956d90       c69fa2e9cbf5f                                                                                         13 minutes ago       Exited              coredns                   0                   da08087b51cd9       coredns-7c65d6cfc9-fg65r
	5d84a01abd3e7       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              13 minutes ago       Exited              kindnet-cni               0                   38db6fab73655       kindnet-7pf7v
	0b03e5e488939       60c005f310ff3                                                                                         13 minutes ago       Exited              kube-proxy                0                   067bc1b2ad7fa       kube-proxy-vskbj
	
	
	==> coredns [521527f17691] <==
	[INFO] 10.244.2.2:33230 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100028s
	[INFO] 10.244.2.2:37727 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077614s
	[INFO] 10.244.2.2:51233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090375s
	[INFO] 10.244.1.2:43082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115984s
	[INFO] 10.244.1.2:45048 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000071244s
	[INFO] 10.244.1.2:48877 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106601s
	[INFO] 10.244.1.2:59235 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000068348s
	[INFO] 10.244.1.2:53808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064222s
	[INFO] 10.244.1.2:54982 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064992s
	[INFO] 10.244.0.4:59177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012236s
	[INFO] 10.244.0.4:59468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096608s
	[INFO] 10.244.0.4:49953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108018s
	[INFO] 10.244.2.2:36658 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081427s
	[INFO] 10.244.1.2:53166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140458s
	[INFO] 10.244.1.2:60442 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069729s
	[INFO] 10.244.0.4:60564 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007076s
	[INFO] 10.244.0.4:57696 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000125726s
	[INFO] 10.244.2.2:33447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114855s
	[INFO] 10.244.2.2:49647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000058138s
	[INFO] 10.244.2.2:55869 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00009725s
	[INFO] 10.244.1.2:49826 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096631s
	[INFO] 10.244.1.2:33376 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000046366s
	[INFO] 10.244.1.2:47184 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7b8b62bf7340] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40424 - 46793 "HINFO IN 2652948645074262826.4033840954787183129. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019948501s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[345670875]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.718) (total time: 30000ms):
	Trace[345670875]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (09:12:48.718)
	Trace[345670875]: [30.000647992s] [30.000647992s] END
	[INFO] plugin/kubernetes: Trace[990255223]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.716) (total time: 30002ms):
	Trace[990255223]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (09:12:48.718)
	Trace[990255223]: [30.002122547s] [30.002122547s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1561533284]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.716) (total time: 30004ms):
	Trace[1561533284]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (09:12:48.720)
	Trace[1561533284]: [30.004471134s] [30.004471134s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [b7aa83ae3a82] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48468 - 41934 "HINFO IN 5248560894606224369.8303849678443807322. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019682687s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[134011415]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.720) (total time: 30000ms):
	Trace[134011415]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (09:12:48.721)
	Trace[134011415]: [30.000772699s] [30.000772699s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1931337556]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.720) (total time: 30001ms):
	Trace[1931337556]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (09:12:48.721)
	Trace[1931337556]: [30.001621273s] [30.001621273s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2093896532]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.720) (total time: 30001ms):
	Trace[2093896532]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (09:12:48.721)
	Trace[2093896532]: [30.001436763s] [30.001436763s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
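Both restarted CoreDNS replicas above ([7b8b62bf7340] and [b7aa83ae3a82]) spend a full 30s with reflector ListAndWatch unable to reach the kubernetes Service VIP at 10.96.0.1:443, which is expected while kube-proxy is still reprogramming the Service rules after the node restart; the watches recover once the VIP becomes reachable. A minimal probe reproducing that failure mode, assuming it runs inside the pod network:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dial the in-cluster API Service VIP the way CoreDNS's client does;
	// before kube-proxy restores the rules this fails just like the
	// "dial tcp 10.96.0.1:443: i/o timeout" lines above.
	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("service VIP reachable")
	}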
	
	
	==> coredns [f991c8e956d9] <==
	[INFO] 10.244.1.2:36169 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206963s
	[INFO] 10.244.1.2:33814 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000088589s
	[INFO] 10.244.1.2:57385 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000535008s
	[INFO] 10.244.0.4:54856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135529s
	[INFO] 10.244.0.4:47831 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.019088159s
	[INFO] 10.244.0.4:46325 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201714s
	[INFO] 10.244.0.4:45239 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000255383s
	[INFO] 10.244.0.4:55042 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000141827s
	[INFO] 10.244.2.2:47888 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108401s
	[INFO] 10.244.2.2:41486 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00044994s
	[INFO] 10.244.2.2:50623 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082841s
	[INFO] 10.244.1.2:54143 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098038s
	[INFO] 10.244.1.2:38802 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046632s
	[INFO] 10.244.0.4:39532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.002579505s
	[INFO] 10.244.2.2:53978 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077749s
	[INFO] 10.244.2.2:60710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092889s
	[INFO] 10.244.2.2:51255 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044117s
	[INFO] 10.244.1.2:36996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056219s
	[INFO] 10.244.1.2:39487 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090704s
	[INFO] 10.244.0.4:39831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131192s
	[INFO] 10.244.0.4:35770 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154922s
	[INFO] 10.244.2.2:45820 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113973s
	[INFO] 10.244.1.2:44519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120184s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-857000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T02_00_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:00:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:14:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:00:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:00:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:00:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-857000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 54854ca4cf93431694d9ad27a68ef89d
	  System UUID:                f6fb40b6-0000-0000-91c0-dbf4ea1b682c
	  Boot ID:                    a1af0517-f4c2-4eae-96db-f7479d049a6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4jzg8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-fg65r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-nl5j5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-857000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-7pf7v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-857000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-857000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-vskbj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-857000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-857000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node ha-857000 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node ha-857000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node ha-857000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           13m                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  NodeReady                13m                    kubelet          Node ha-857000 status is now: NodeReady
	  Normal  RegisteredNode           12m                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           9m13s                  node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s (x8 over 3m49s)  kubelet          Node ha-857000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x8 over 3m49s)  kubelet          Node ha-857000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x7 over 3m49s)  kubelet          Node ha-857000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m55s                  node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           2m54s                  node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           2m26s                  node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           24s                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
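The percentage columns in each "Allocated resources" block are the summed requests and limits divided by the node's allocatable amount, truncated to an integer. A worked sketch with the values from the table above (2 CPUs = 2000m allocatable, 2164336Ki memory allocatable):

	package main

	import "fmt"

	// Reproduces ha-857000's "cpu 950m (47%)" and "memory 290Mi (13%)".
	func main() {
		cpuRequests, cpuAllocatable := 950, 2000         // millicores
		memRequests, memAllocatable := 290*1024, 2164336 // Ki
		fmt.Printf("cpu %d%%\n", cpuRequests*100/cpuAllocatable)       // 47%
		fmt.Printf("memory %d%%\n", memRequests*100/memAllocatable)    // 13%
	}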
	
	
	Name:               ha-857000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_01_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:01:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:14:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:11:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-857000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 39fe1ffb0a9e4afb9fa3c09c6b13fed7
	  System UUID:                19404b28-0000-0000-842d-d4858a62cbd3
	  Boot ID:                    625329b0-bed9-4da5-90fd-2859c5b852dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mhjf6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-857000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-vh2h2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-857000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-857000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zrqvr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-857000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-857000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m51s                kube-proxy       
	  Normal   Starting                 9m17s                kube-proxy       
	  Normal   Starting                 12m                  kube-proxy       
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-857000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-857000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-857000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                  node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           12m                  node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Warning  Rebooted                 9m21s                kubelet          Node ha-857000-m02 has been rebooted, boot id: b4c87c19-d878-45a1-b0c5-442ae4d2861b
	  Normal   NodeHasSufficientPID     9m21s                kubelet          Node ha-857000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m21s                kubelet          Node ha-857000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 9m21s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9m21s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m21s                kubelet          Node ha-857000-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m13s                node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   Starting                 3m7s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m7s (x8 over 3m7s)  kubelet          Node ha-857000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m7s (x8 over 3m7s)  kubelet          Node ha-857000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m7s (x7 over 3m7s)  kubelet          Node ha-857000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m55s                node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           2m54s                node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           2m26s                node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           24s                  node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	
	
	Name:               ha-857000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_03_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:03:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:14:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-857000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 69dae176c7914316a8660d135e30666c
	  System UUID:                3d8f47ea-0000-0000-a80b-a24a99cad96e
	  Boot ID:                    e1620cd8-3a62-4426-a27e-eeaf7b39756d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5x9l8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-857000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-vc6z5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-857000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-857000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-g9wxm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-857000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-857000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m30s              kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-857000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-857000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-857000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           9m13s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           2m55s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           2m54s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   Starting                 2m34s              kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m33s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m33s              kubelet          Node ha-857000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s              kubelet          Node ha-857000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s              kubelet          Node ha-857000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m33s              kubelet          Node ha-857000-m03 has been rebooted, boot id: e1620cd8-3a62-4426-a27e-eeaf7b39756d
	  Normal   RegisteredNode           2m26s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           24s                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	
	
	Name:               ha-857000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_04_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:04:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:14:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-857000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 15c3f15f82fe4af0a76f2083dcf53a13
	  System UUID:                32bc423b-0000-0000-90a4-5417ea5ec912
	  Boot ID:                    cd10fc3d-989b-457a-8925-881b38fed37e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4jk9v       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-528ht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 113s                 kube-proxy       
	  Normal   Starting                 10m                  kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)    kubelet          Node ha-857000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)    kubelet          Node ha-857000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)    kubelet          Node ha-857000-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                  node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   NodeReady                10m                  kubelet          Node ha-857000-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m13s                node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           2m55s                node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           2m54s                node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           2m26s                node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   NodeNotReady             2m15s                node-controller  Node ha-857000-m04 status is now: NodeNotReady
	  Normal   Starting                 115s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  115s (x3 over 115s)  kubelet          Node ha-857000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    115s (x3 over 115s)  kubelet          Node ha-857000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     115s (x3 over 115s)  kubelet          Node ha-857000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 115s (x2 over 115s)  kubelet          Node ha-857000-m04 has been rebooted, boot id: cd10fc3d-989b-457a-8925-881b38fed37e
	  Normal   NodeReady                115s (x2 over 115s)  kubelet          Node ha-857000-m04 status is now: NodeReady
	  Normal   RegisteredNode           24s                  node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	
	
	Name:               ha-857000-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_14_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:14:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m05
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:14:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:14:33 +0000   Tue, 17 Sep 2024 09:14:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:14:33 +0000   Tue, 17 Sep 2024 09:14:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:14:33 +0000   Tue, 17 Sep 2024 09:14:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:14:33 +0000   Tue, 17 Sep 2024 09:14:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-857000-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 2db6cfb0d1434c14b519f27d6d4511fd
	  System UUID:                ee9442ef-0000-0000-9576-64d480b59214
	  Boot ID:                    81080519-7f3f-4191-9d49-cd7fa64b5401
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-857000-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         30s
	  kube-system                 kindnet-dmlfn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      32s
	  kube-system                 kube-apiserver-ha-857000-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-ha-857000-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-6dtwp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-ha-857000-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-vip-ha-857000-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeAllocatableEnforced  33s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s (x8 over 33s)  kubelet          Node ha-857000-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s (x8 over 33s)  kubelet          Node ha-857000-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s (x7 over 33s)  kubelet          Node ha-857000-m05 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node ha-857000-m05 event: Registered Node ha-857000-m05 in Controller
	  Normal  RegisteredNode           29s                node-controller  Node ha-857000-m05 event: Registered Node ha-857000-m05 in Controller
	  Normal  RegisteredNode           29s                node-controller  Node ha-857000-m05 event: Registered Node ha-857000-m05 in Controller
	  Normal  RegisteredNode           24s                node-controller  Node ha-857000-m05 event: Registered Node ha-857000-m05 in Controller
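
	The node dumps above have the shape of kubectl describe node output; the percentages are relative to the node's allocatable resources, e.g. 750m of CPU requested on m05's 2-CPU (2000m) node is 750/2000 = 37.5%, rendered as 37%, and 150Mi of 2164336Ki memory is roughly 7%. A rough manual re-collection against the same profile would be:

	    kubectl describe node ha-857000-m04 ha-857000-m05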
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035828] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007970] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.690889] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006763] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.660573] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.226234] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.530337] systemd-fstab-generator[461]: Ignoring "noauto" option for root device
	[  +0.102427] systemd-fstab-generator[473]: Ignoring "noauto" option for root device
	[  +1.905407] systemd-fstab-generator[1088]: Ignoring "noauto" option for root device
	[  +0.264183] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +0.055811] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.051134] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +0.114709] systemd-fstab-generator[1152]: Ignoring "noauto" option for root device
	[  +2.420834] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.093862] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	[  +0.101457] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.112591] systemd-fstab-generator[1405]: Ignoring "noauto" option for root device
	[  +0.460313] systemd-fstab-generator[1565]: Ignoring "noauto" option for root device
	[  +6.769000] kauditd_printk_skb: 212 callbacks suppressed
	[Sep17 09:11] kauditd_printk_skb: 40 callbacks suppressed
	[Sep17 09:12] kauditd_printk_skb: 78 callbacks suppressed
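
	The dmesg excerpt is typical of the minikube Buildroot guest under hyperkit: the ACPI checksum and RealTimeClock warnings, the RETBleed notice, the missing regulatory.db, and the NFSD recovery-directory errors show up on healthy runs as well, so none of them point at the failure. While the profile is still up, the buffer can be re-read with:

	    out/minikube-darwin-amd64 ssh -p ha-857000 -- dmesg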
	
	
	==> etcd [8c0804e78de8] <==
	{"level":"info","ts":"2024-09-17T09:12:03.647767Z","caller":"traceutil/trace.go:171","msg":"trace[1450641964] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1898; }","duration":"121.988853ms","start":"2024-09-17T09:12:03.525765Z","end":"2024-09-17T09:12:03.647754Z","steps":["trace[1450641964] 'process raft request'  (duration: 121.923204ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T09:13:08.579639Z","caller":"traceutil/trace.go:171","msg":"trace[2135392401] transaction","detail":"{read_only:false; response_revision:2205; number_of_response:1; }","duration":"108.477653ms","start":"2024-09-17T09:13:08.471150Z","end":"2024-09-17T09:13:08.579628Z","steps":["trace[2135392401] 'process raft request'  (duration: 108.403212ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T09:14:02.562336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(5207222418258591927 13314548521573537860 18406437859275119615) learners=(12916380725732009237)"}
	{"level":"info","ts":"2024-09-17T09:14:02.562708Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"b34033d60cf56515","added-peer-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-09-17T09:14:02.562854Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:02.563024Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:02.564080Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:02.564408Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:02.564727Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:02.564003Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:02.565427Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515","remote-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-09-17T09:14:02.565597Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"warn","ts":"2024-09-17T09:14:02.611936Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"b34033d60cf56515","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-09-17T09:14:03.606134Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"b34033d60cf56515","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-17T09:14:03.701399Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:03.721014Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:03.731438Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:03.741642Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"b34033d60cf56515","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-17T09:14:03.741683Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:03.789570Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"b34033d60cf56515","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-17T09:14:03.789710Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"warn","ts":"2024-09-17T09:14:04.113077Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"b34033d60cf56515","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-17T09:14:04.609085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(5207222418258591927 12916380725732009237 13314548521573537860 18406437859275119615)"}
	{"level":"info","ts":"2024-09-17T09:14:04.609458Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-09-17T09:14:04.609879Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"b34033d60cf56515"}
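
	This etcd instance records the standard learner-join flow for the new control plane m05 (peer b34033d60cf56515, https://192.169.0.9:2380): the member is added as a raft learner, promotion is rejected with "can only promote a learner member which is in sync with leader" until the learner catches up, and at 09:14:04 the voter set grows to four. A membership check along these lines (pod name and certificate paths assumed from minikube's usual /var/lib/minikube/certs layout) would confirm it:

	    kubectl -n kube-system exec etcd-ha-857000 -- etcdctl \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      member list -w table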
	
	
	==> etcd [f4f59b8c7640] <==
	{"level":"info","ts":"2024-09-17T09:10:21.875702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:10:23.692511Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275704,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:10:24.194017Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275704,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:10:24.278276Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:10:24.278324Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:10:24.301258Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-17T09:10:24.301488Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"info","ts":"2024-09-17T09:10:24.470887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.470976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.470995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.471013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.471022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:10:24.694867Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275704,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:10:24.938557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.746471868s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T09:10:24.938607Z","caller":"traceutil/trace.go:171","msg":"trace[802347161] range","detail":"{range_begin:; range_end:; }","duration":"1.746534049s","start":"2024-09-17T09:10:23.192066Z","end":"2024-09-17T09:10:24.938600Z","steps":["trace[802347161] 'agreement among raft nodes before linearized reading'  (duration: 1.746469617s)"],"step_count":1}
	{"level":"error","ts":"2024-09-17T09:10:24.938646Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: context canceled\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
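
	This is the pre-restart etcd: it keeps starting pre-vote elections at term 2 while both peers (192.169.0.6 and 192.169.0.7) are unreachable, and with 1 of 3 voters up there is no quorum (a 3-member cluster needs floor(3/2)+1 = 2), so linearizable reads time out and /readyz returns 503. With the TLS flags omitted for brevity, a health sweep over the peer endpoints would look like:

	    etcdctl --endpoints=https://192.169.0.5:2379,https://192.169.0.6:2379,https://192.169.0.7:2379 endpoint health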
	
	
	==> kernel <==
	 09:14:35 up 4 min,  0 users,  load average: 0.54, 0.40, 0.17
	Linux ha-857000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3d889c7c8da7] <==
	I0917 09:14:09.604353       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:14:09.604381       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:14:09.604532       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0917 09:14:09.604563       1 main.go:322] Node ha-857000-m05 has CIDR [10.244.4.0/24] 
	I0917 09:14:09.604675       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 192.169.0.9 Flags: [] Table: 0} 
	I0917 09:14:19.604061       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:14:19.604207       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:14:19.604390       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:14:19.604514       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:14:19.604612       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:14:19.604686       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:14:19.604765       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0917 09:14:19.604809       1 main.go:322] Node ha-857000-m05 has CIDR [10.244.4.0/24] 
	I0917 09:14:19.604913       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:14:19.604966       1 main.go:299] handling current node
	I0917 09:14:29.603812       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0917 09:14:29.603909       1 main.go:322] Node ha-857000-m05 has CIDR [10.244.4.0/24] 
	I0917 09:14:29.604115       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:14:29.604276       1 main.go:299] handling current node
	I0917 09:14:29.604421       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:14:29.604738       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:14:29.604890       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:14:29.604901       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:14:29.605039       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:14:29.605115       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
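
	kindnet rescans the node list about every 10 seconds and keeps one host route per remote PodCIDR via that node's InternalIP; the "Adding route" line above is the new m05 CIDR being programmed. On the node it corresponds to something like:

	    ip route replace 10.244.4.0/24 via 192.169.0.9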
	
	
	==> kindnet [5d84a01abd3e] <==
	I0917 09:05:22.964948       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:32.966280       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:32.966503       1 main.go:299] handling current node
	I0917 09:05:32.966605       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:32.966739       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:32.966951       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:32.967059       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:32.967333       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:32.967449       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:42.964585       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:42.964999       1 main.go:299] handling current node
	I0917 09:05:42.965252       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:42.965422       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:42.965746       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:42.965829       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:42.966204       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:42.966357       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965279       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:52.965376       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:52.965533       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:52.965592       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:52.965673       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:52.965753       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965812       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:52.965902       1 main.go:299] handling current node
	
	
	==> kube-apiserver [475dedee3722] <==
	I0917 09:11:36.333360       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 09:11:36.335609       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 09:11:36.383731       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 09:11:36.383763       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 09:11:36.384428       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 09:11:36.385090       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 09:11:36.385168       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 09:11:36.385606       1 aggregator.go:171] initial CRD sync complete...
	I0917 09:11:36.385745       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 09:11:36.386077       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 09:11:36.386187       1 cache.go:39] Caches are synced for autoregister controller
	I0917 09:11:36.388938       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 09:11:36.396198       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 09:11:36.396611       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0917 09:11:36.396812       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	W0917 09:11:36.438133       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0917 09:11:36.461867       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 09:11:36.465355       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 09:11:36.465387       1 policy_source.go:224] refreshing policies
	I0917 09:11:36.484251       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 09:11:36.540432       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 09:11:36.548136       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0917 09:11:36.554355       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0917 09:11:37.296848       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 09:11:37.666999       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
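
	The two "Resetting endpoints for master service" lines show the kubernetes Service endpoint flipping from 192.169.0.6 to 192.169.0.5 as this apiserver takes over endpoint reconciliation, and the "Found stale data" error confirms the previous apiserver did not exit cleanly. The current value is visible with:

	    kubectl get endpoints kubernetes -n default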
	
	
	==> kube-apiserver [a18a6b023cd6] <==
	I0917 09:10:52.375949       1 options.go:228] external host was not specified, using 192.169.0.5
	I0917 09:10:52.377617       1 server.go:142] Version: v1.31.1
	I0917 09:10:52.377684       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:10:52.824178       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 09:10:52.824356       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 09:10:52.826684       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 09:10:52.828510       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 09:10:52.829505       1 instance.go:232] Using reconciler: lease
	W0917 09:11:12.810788       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0917 09:11:12.813364       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0917 09:11:12.831731       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0917 09:11:12.831919       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
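
	This earlier apiserver never came up: it spent about 20 seconds (reconciler start 09:10:52, fatal exit 09:11:12) failing gRPC handshakes to etcd on 127.0.0.1:2379 and then died with "Error creating leases", which lines up with the quorum-less etcd instance above. Assuming crictl is usable through cri-dockerd on the node, both apiserver containers can be listed with:

	    out/minikube-darwin-amd64 ssh -p ha-857000 -- sudo crictl ps -a --name kube-apiserver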
	
	
	==> kube-controller-manager [13b7f8a93ad4] <==
	I0917 09:10:53.058887       1 serving.go:386] Generated self-signed cert in-memory
	I0917 09:10:53.469010       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 09:10:53.469133       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:10:53.478660       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 09:10:53.478827       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 09:10:53.478677       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 09:10:53.479256       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0917 09:11:13.838538       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
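
	The controller-manager hit the same window: its startup health probe kept getting connection refused from the apiserver on 192.169.0.5:8443 until the wait timed out. The probe is easy to replay by hand (-k only skips server certificate verification):

	    curl -k https://192.169.0.5:8443/healthz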
	
	
	==> kube-controller-manager [ca7fe8ccd4c5] <==
	E0917 09:14:01.952809       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-t7q9m failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-t7q9m\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0917 09:14:02.067848       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-857000-m04"
	I0917 09:14:02.069752       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-857000-m05\" does not exist"
	I0917 09:14:02.082951       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-857000-m05" podCIDRs=["10.244.4.0/24"]
	I0917 09:14:02.082992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:02.083012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:02.131285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:02.527608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:03.151798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:04.753606       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:05.036390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:05.037599       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-857000-m05"
	I0917 09:14:05.051631       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:05.440757       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:05.531203       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:05.620111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:05.644082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:10.280630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:10.374858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:12.509235       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:23.788435       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:23.789949       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-857000-m04"
	I0917 09:14:23.799441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:25.050665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:33.028322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
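
	The restarted controller-manager shows the m05 join from the control plane's side: a benign CSR update conflict, node-ipam handing out PodCIDR 10.244.4.0/24, and the node-lifecycle controller adopting the node; the "Can't get CPU or zone information" messages for m04 are normal while a node is briefly NotReady. The join-time certificate requests can be reviewed with:

	    kubectl get csr --sort-by=.metadata.creationTimestamp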
	
	
	==> kube-proxy [0b03e5e48893] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:00:59.069869       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:00:59.079118       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 09:00:59.079199       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:00:59.109184       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:00:59.109227       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:00:59.109245       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:00:59.111661       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:00:59.111847       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:00:59.111876       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:59.112952       1 config.go:199] "Starting service config controller"
	I0917 09:00:59.112979       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:00:59.112995       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:00:59.112998       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:00:59.113603       1 config.go:328] "Starting node config controller"
	I0917 09:00:59.113673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:00:59.213587       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:00:59.213649       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:00:59.213808       1 shared_informer.go:320] Caches are synced for node config
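
	The nftables errors at the top of this log (and of the restarted instance below) come from kube-proxy's startup cleanup probe on a kernel without nft support; it then falls back as logged ("Using iptables Proxier") and syncs normally, so they are cosmetic. That the iptables rules really were programmed can be spot-checked with:

	    out/minikube-darwin-amd64 ssh -p ha-857000 -- sudo iptables -t nat -L KUBE-SERVICES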
	
	
	==> kube-proxy [c37a677e3118] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:12:19.054558       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:12:19.080090       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 09:12:19.080297       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:12:19.208559       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:12:19.208589       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:12:19.208607       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:12:19.212603       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:12:19.213076       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:12:19.213105       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:12:19.216919       1 config.go:199] "Starting service config controller"
	I0917 09:12:19.217067       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:12:19.217988       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:12:19.218116       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:12:19.228165       1 config.go:328] "Starting node config controller"
	I0917 09:12:19.228196       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:12:19.319175       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:12:19.319361       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:12:19.328396       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [00ff29c21371] <==
	W0917 09:11:36.381567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 09:11:36.381612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.381875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 09:11:36.382055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 09:11:36.382353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382449       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 09:11:36.382484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382718       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 09:11:36.382767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 09:11:36.383104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 09:11:36.446439       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0917 09:14:02.163499       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dmlfn\": pod kindnet-dmlfn is already assigned to node \"ha-857000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-dmlfn" node="ha-857000-m05"
	E0917 09:14:02.163933       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e206acfa-4993-496a-9e1d-16406007660e(kube-system/kindnet-dmlfn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dmlfn"
	E0917 09:14:02.164397       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dmlfn\": pod kindnet-dmlfn is already assigned to node \"ha-857000-m05\"" pod="kube-system/kindnet-dmlfn"
	E0917 09:14:02.164245       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mt6p5\": pod kindnet-mt6p5 is already assigned to node \"ha-857000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-mt6p5" node="ha-857000-m05"
	E0917 09:14:02.164750       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b68ac5c1-1d8b-4e95-a0d7-298a99ba43ae(kube-system/kindnet-mt6p5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mt6p5"
	E0917 09:14:02.164931       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mt6p5\": pod kindnet-mt6p5 is already assigned to node \"ha-857000-m05\"" pod="kube-system/kindnet-mt6p5"
	I0917 09:14:02.165129       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mt6p5" node="ha-857000-m05"
	I0917 09:14:02.165842       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dmlfn" node="ha-857000-m05"
	E0917 09:14:02.164280       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-gblm4\": pod kube-proxy-gblm4 is already assigned to node \"ha-857000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-gblm4" node="ha-857000-m05"
	E0917 09:14:02.166155       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 277d386e-b69f-4f54-9864-a58175d4f372(kube-system/kube-proxy-gblm4) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-gblm4"
	E0917 09:14:02.174773       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-gblm4\": pod kube-proxy-gblm4 is already assigned to node \"ha-857000-m05\"" pod="kube-system/kube-proxy-gblm4"
	I0917 09:14:02.175446       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-gblm4" node="ha-857000-m05"
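
	The "already assigned to node" failures are a benign race while the new control plane joined: another scheduler instance bound the kindnet and kube-proxy pods first, the duplicate binding is rejected by the apiserver, and this scheduler drops the pods from its queue ("Pod has been assigned to node. Abort adding it back to queue."). The pods that actually landed on m05 can be listed with:

	    kubectl get pods -n kube-system -o wide --field-selector spec.nodeName=ha-857000-m05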
	
	
	==> kube-scheduler [d9fae1497b04] <==
	E0917 09:09:54.047035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:01.417081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:01.417178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:02.586956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:02.587049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:09.339944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:09.340160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:12.375946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:12.375997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:14.579545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:14.579979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:18.357149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:18.357192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:19.971293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:19.971663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:22.259174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:22.259229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:24.413900       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:24.413975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	I0917 09:10:24.953479       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 09:10:24.953762       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0917 09:10:24.953909       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	E0917 09:10:24.953957       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0917 09:10:24.955052       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 09:10:24.955061       1 run.go:72] "command failed" err="finished without leader elect"
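
	This is the pre-restart scheduler: every watch fails with connection refused against 192.169.0.5:8443 while the apiserver was down, and at 09:10:24 it gives up with "finished without leader elect", i.e. it shut down before it could acquire or renew leadership. The scheduler's leader-election record lives in a Lease object:

	    kubectl get lease -n kube-system kube-scheduler -o yaml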
	
	
	==> kubelet <==
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363896    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eecd1421-3a2f-4e48-b2b2-abcbef7869e7-cni-cfg\") pod \"kindnet-7pf7v\" (UID: \"eecd1421-3a2f-4e48-b2b2-abcbef7869e7\") " pod="kube-system/kindnet-7pf7v"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363942    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a396757-8954-48d2-b708-dcdfbab21dc7-xtables-lock\") pod \"kube-proxy-vskbj\" (UID: \"7a396757-8954-48d2-b708-dcdfbab21dc7\") " pod="kube-system/kube-proxy-vskbj"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363979    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a396757-8954-48d2-b708-dcdfbab21dc7-lib-modules\") pod \"kube-proxy-vskbj\" (UID: \"7a396757-8954-48d2-b708-dcdfbab21dc7\") " pod="kube-system/kube-proxy-vskbj"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.364021    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d81e7b55-a14e-4dc7-9193-ebe6914cdacf-tmp\") pod \"storage-provisioner\" (UID: \"d81e7b55-a14e-4dc7-9193-ebe6914cdacf\") " pod="kube-system/storage-provisioner"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.381710    1572 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 17 09:12:18 ha-857000 kubelet[1572]: I0917 09:12:18.732394    1572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1d198ffe0b277108e5042e64604ceb887f7f17cd5814893fbf789f1ce180f0"
	Sep 17 09:12:18 ha-857000 kubelet[1572]: I0917 09:12:18.754870    1572 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-857000" podUID="84b805d8-9a8f-4c6f-b18f-76c98ca4776c"
	Sep 17 09:12:18 ha-857000 kubelet[1572]: I0917 09:12:18.779039    1572 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-857000"
	Sep 17 09:12:19 ha-857000 kubelet[1572]: I0917 09:12:19.228668    1572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ca8e5543181b6f9996b6d7e435c3947" path="/var/lib/kubelet/pods/3ca8e5543181b6f9996b6d7e435c3947/volumes"
	Sep 17 09:12:19 ha-857000 kubelet[1572]: I0917 09:12:19.846405    1572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-857000" podStartSLOduration=1.846388448 podStartE2EDuration="1.846388448s" podCreationTimestamp="2024-09-17 09:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-17 09:12:19.829429782 +0000 UTC m=+94.772487592" watchObservedRunningTime="2024-09-17 09:12:19.846388448 +0000 UTC m=+94.789446258"
	Sep 17 09:12:45 ha-857000 kubelet[1572]: E0917 09:12:45.245854    1572 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 09:12:45 ha-857000 kubelet[1572]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 09:12:45 ha-857000 kubelet[1572]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 09:12:45 ha-857000 kubelet[1572]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 09:12:45 ha-857000 kubelet[1572]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 09:12:45 ha-857000 kubelet[1572]: I0917 09:12:45.363926    1572 scope.go:117] "RemoveContainer" containerID="fcb7038a6ac9ef515ab38df1dab73586ab93030767bab4f0d4d141f34bac886f"
	Sep 17 09:12:49 ha-857000 kubelet[1572]: I0917 09:12:49.092301    1572 scope.go:117] "RemoveContainer" containerID="611759af4bf7a8b48c2739f53afaeba3cb10af70a80bf85bfb78eebe6230c491"
	Sep 17 09:12:49 ha-857000 kubelet[1572]: I0917 09:12:49.092548    1572 scope.go:117] "RemoveContainer" containerID="67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7"
	Sep 17 09:12:49 ha-857000 kubelet[1572]: E0917 09:12:49.092633    1572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d81e7b55-a14e-4dc7-9193-ebe6914cdacf)\"" pod="kube-system/storage-provisioner" podUID="d81e7b55-a14e-4dc7-9193-ebe6914cdacf"
	Sep 17 09:13:00 ha-857000 kubelet[1572]: I0917 09:13:00.226410    1572 scope.go:117] "RemoveContainer" containerID="67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7"
	Sep 17 09:13:45 ha-857000 kubelet[1572]: E0917 09:13:45.246174    1572 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 09:13:45 ha-857000 kubelet[1572]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 09:13:45 ha-857000 kubelet[1572]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 09:13:45 ha-857000 kubelet[1572]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 09:13:45 ha-857000 kubelet[1572]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-857000 -n ha-857000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-857000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (84.80s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-857000" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-857000\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-857000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-857000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":
\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.169.0.9\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"i
ngress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608
000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-857000 -n ha-857000
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-857000 logs -n 25: (3.634146964s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m04 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp testdata/cp-test.txt                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3490570692/001/cp-test_ha-857000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000:/home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000 sudo cat                                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m02:/home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m02 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m03:/home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | ha-857000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-857000 ssh -n ha-857000-m03 sudo cat                                                                                      | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | /home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-857000 node stop m02 -v=7                                                                                                 | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-857000 node start m02 -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:04 PDT | 17 Sep 24 02:05 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000 -v=7                                                                                                       | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-857000 -v=7                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:05 PDT | 17 Sep 24 02:06 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-857000 --wait=true -v=7                                                                                                | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:06 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-857000                                                                                                            | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	| node    | ha-857000 node delete m03 -v=7                                                                                               | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-857000 stop -v=7                                                                                                          | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:07 PDT | 17 Sep 24 02:10 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-857000 --wait=true                                                                                                     | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:10 PDT | 17 Sep 24 02:13 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-857000                                                                                                             | ha-857000 | jenkins | v1.34.0 | 17 Sep 24 02:13 PDT | 17 Sep 24 02:14 PDT |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 02:10:27
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 02:10:27.105477    4110 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:10:27.105665    4110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:27.105670    4110 out.go:358] Setting ErrFile to fd 2...
	I0917 02:10:27.105674    4110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:27.105845    4110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:10:27.107332    4110 out.go:352] Setting JSON to false
	I0917 02:10:27.130053    4110 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2397,"bootTime":1726561830,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:10:27.130205    4110 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:10:27.152188    4110 out.go:177] * [ha-857000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:10:27.194040    4110 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:10:27.194117    4110 notify.go:220] Checking for updates...
	I0917 02:10:27.238575    4110 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:27.259736    4110 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:10:27.280930    4110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:10:27.301762    4110 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:10:27.322633    4110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:10:27.344421    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:27.344920    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.344973    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:27.354413    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52088
	I0917 02:10:27.354771    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:27.355142    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:27.355153    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:27.355356    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:27.355460    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.355684    4110 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:10:27.355976    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.356005    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:27.364420    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52090
	I0917 02:10:27.364811    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:27.365167    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:27.365180    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:27.365391    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:27.365504    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.393706    4110 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 02:10:27.435894    4110 start.go:297] selected driver: hyperkit
	I0917 02:10:27.435922    4110 start.go:901] validating driver "hyperkit" against &{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclas
s:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:10:27.436195    4110 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:10:27.436329    4110 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:10:27.436542    4110 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:10:27.445831    4110 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:10:27.449537    4110 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.449556    4110 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:10:27.452252    4110 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:10:27.452291    4110 cni.go:84] Creating CNI manager for ""
	I0917 02:10:27.452327    4110 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:10:27.452403    4110 start.go:340] cluster config:
	{Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-t
iller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:10:27.452523    4110 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:10:27.494874    4110 out.go:177] * Starting "ha-857000" primary control-plane node in "ha-857000" cluster
	I0917 02:10:27.515806    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:27.515897    4110 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:10:27.515918    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:10:27.516138    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:10:27.516158    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:10:27.516383    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:27.517269    4110 start.go:360] acquireMachinesLock for ha-857000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:10:27.517388    4110 start.go:364] duration metric: took 96.177µs to acquireMachinesLock for "ha-857000"
	I0917 02:10:27.517441    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:10:27.517460    4110 fix.go:54] fixHost starting: 
	I0917 02:10:27.517898    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:27.517930    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:27.526784    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52092
	I0917 02:10:27.527129    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:27.527462    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:27.527473    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:27.527739    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:27.527880    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.527995    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:10:27.528094    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:27.528210    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3964
	I0917 02:10:27.529100    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid 3964 missing from process table
	I0917 02:10:27.529122    4110 fix.go:112] recreateIfNeeded on ha-857000: state=Stopped err=<nil>
	I0917 02:10:27.529141    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	W0917 02:10:27.529225    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:10:27.570570    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000" ...
	I0917 02:10:27.591801    4110 main.go:141] libmachine: (ha-857000) Calling .Start
	I0917 02:10:27.592089    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:27.592131    4110 main.go:141] libmachine: (ha-857000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid
	I0917 02:10:27.592193    4110 main.go:141] libmachine: (ha-857000) DBG | Using UUID f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c
	I0917 02:10:27.699994    4110 main.go:141] libmachine: (ha-857000) DBG | Generated MAC c2:63:2b:63:80:76
	I0917 02:10:27.700019    4110 main.go:141] libmachine: (ha-857000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:10:27.700136    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b6e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:27.700165    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b6e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:27.700210    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:10:27.700256    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f6fb2ac0-c2e1-40b6-91c0-dbf4ea1b682c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/ha-857000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:10:27.700270    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:10:27.701709    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 DEBUG: hyperkit: Pid is 4124
	I0917 02:10:27.702059    4110 main.go:141] libmachine: (ha-857000) DBG | Attempt 0
	I0917 02:10:27.702070    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:27.702132    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:10:27.703343    4110 main.go:141] libmachine: (ha-857000) DBG | Searching for c2:63:2b:63:80:76 in /var/db/dhcpd_leases ...
	I0917 02:10:27.703398    4110 main.go:141] libmachine: (ha-857000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:10:27.703416    4110 main.go:141] libmachine: (ha-857000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66e94781}
	I0917 02:10:27.703422    4110 main.go:141] libmachine: (ha-857000) DBG | Found match: c2:63:2b:63:80:76
	I0917 02:10:27.703434    4110 main.go:141] libmachine: (ha-857000) DBG | IP: 192.169.0.5
	I0917 02:10:27.703500    4110 main.go:141] libmachine: (ha-857000) Calling .GetConfigRaw
	I0917 02:10:27.704135    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:27.704313    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:27.704745    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:10:27.704755    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:27.704862    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:27.704967    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:27.705062    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:27.705172    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:27.705289    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:27.705426    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:27.705645    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:27.705655    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:10:27.709824    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:10:27.761328    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:10:27.762023    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:27.762037    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:27.762058    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:27.762068    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:28.142704    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:10:28.142720    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:10:28.257454    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:28.257477    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:28.257500    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:28.257510    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:28.258332    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:10:28.258356    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:10:33.845455    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:10:33.845506    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:10:33.845516    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:10:33.869458    4110 main.go:141] libmachine: (ha-857000) DBG | 2024/09/17 02:10:33 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:10:38.774269    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:10:38.774287    4110 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:10:38.774460    4110 buildroot.go:166] provisioning hostname "ha-857000"
	I0917 02:10:38.774470    4110 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:10:38.774556    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.774689    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:38.774787    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.774865    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.774959    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:38.775097    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:38.775254    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:38.775262    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000 && echo "ha-857000" | sudo tee /etc/hostname
	I0917 02:10:38.842954    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000
	
	I0917 02:10:38.842972    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.843114    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:38.843224    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.843309    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.843398    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:38.843557    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:38.843701    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:38.843712    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:10:38.908790    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:10:38.908811    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:10:38.908824    4110 buildroot.go:174] setting up certificates
	I0917 02:10:38.908830    4110 provision.go:84] configureAuth start
	I0917 02:10:38.908845    4110 main.go:141] libmachine: (ha-857000) Calling .GetMachineName
	I0917 02:10:38.908979    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:38.909073    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.909177    4110 provision.go:143] copyHostCerts
	I0917 02:10:38.909208    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:10:38.909278    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:10:38.909287    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:10:38.909606    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:10:38.909812    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:10:38.909853    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:10:38.909857    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:10:38.909935    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:10:38.910085    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:10:38.910127    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:10:38.910132    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:10:38.910214    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:10:38.910362    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000 san=[127.0.0.1 192.169.0.5 ha-857000 localhost minikube]
	I0917 02:10:38.962566    4110 provision.go:177] copyRemoteCerts
	I0917 02:10:38.962618    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:10:38.962632    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:38.962737    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:38.962836    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:38.962932    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:38.963020    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:38.998776    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:10:38.998851    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:10:39.018683    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:10:39.018741    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 02:10:39.038754    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:10:39.038814    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:10:39.058064    4110 provision.go:87] duration metric: took 149.217348ms to configureAuth
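
The configureAuth step above signs a per-machine Docker server certificate against the minikube CA, embedding the SANs listed in the provision.go line (127.0.0.1, 192.169.0.5, ha-857000, localhost, minikube), then copies the CA cert, server cert, and server key into /etc/docker over SSH. A minimal standalone sketch of that signing step follows; this is not minikube's actual code, and the stand-in self-signed CA and three-year lifetime are illustrative assumptions (minikube loads the existing ca.pem/ca-key.pem instead).

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Stand-in CA (assumption); minikube loads ca.pem/ca-key.pem from .minikube/certs.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0), // illustrative lifetime
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate carrying the SANs from the provision.go log line.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-857000", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
        }
        der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }

With --tlsverify in the dockerd command line written below, only clients presenting a certificate from the same CA can reach the daemon on port 2376.
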
	I0917 02:10:39.058076    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:10:39.058257    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:39.058270    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:39.058416    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:39.058513    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:39.058598    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.058689    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.058780    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:39.058915    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:39.059035    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:39.059042    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:10:39.117847    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:10:39.117859    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:10:39.117937    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:10:39.117952    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:39.118078    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:39.118171    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.118258    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.118338    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:39.118469    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:39.118616    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:39.118663    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:10:39.186097    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:10:39.186120    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:39.186247    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:39.186347    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.186426    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:39.186527    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:39.186659    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:39.186806    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:39.186817    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:10:40.814202    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:10:40.814217    4110 machine.go:96] duration metric: took 13.109237782s to provisionDockerMachine
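
The one-liner above is an idempotent-update idiom: diff exits non-zero when the rendered unit differs from what is on disk or, as the "can't stat" output shows on this first pass, when the target does not exist yet, and only then does the block after || install the new file and reload/restart docker. A local sketch of the same replace-only-on-change pattern (illustrative only, not minikube's code, which performs this remotely over SSH):

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    func main() {
        const unit = "/lib/systemd/system/docker.service"
        rendered := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // abbreviated
        current, err := os.ReadFile(unit)
        if err == nil && bytes.Equal(current, rendered) {
            return // unchanged: no daemon-reload, no restart
        }
        // Stage beside the live unit, then swap, mirroring docker.service.new above.
        if err := os.WriteFile(unit+".new", rendered, 0o644); err != nil {
            panic(err)
        }
        if err := os.Rename(unit+".new", unit); err != nil {
            panic(err)
        }
        // Only a real change reaches the disruptive part.
        exec.Command("systemctl", "daemon-reload").Run()
        exec.Command("systemctl", "restart", "docker").Run()
    }
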
	I0917 02:10:40.814229    4110 start.go:293] postStartSetup for "ha-857000" (driver="hyperkit")
	I0917 02:10:40.814236    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:10:40.814246    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:40.814438    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:10:40.814456    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:40.814571    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:40.814667    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.814762    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:40.814848    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:40.854204    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:10:40.857656    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:10:40.857668    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:10:40.857773    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:10:40.857955    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:10:40.857962    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:10:40.858166    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:10:40.867201    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:10:40.895727    4110 start.go:296] duration metric: took 81.487995ms for postStartSetup
	I0917 02:10:40.895754    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:40.895937    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:10:40.895964    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:40.896062    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:40.896140    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.896211    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:40.896292    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:40.931812    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:10:40.931872    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:10:40.965671    4110 fix.go:56] duration metric: took 13.447980679s for fixHost
	I0917 02:10:40.965693    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:40.965831    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:40.965924    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.966013    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:40.966122    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:40.966261    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:40.966403    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0917 02:10:40.966410    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:10:41.023835    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564240.935930388
	
	I0917 02:10:41.023847    4110 fix.go:216] guest clock: 1726564240.935930388
	I0917 02:10:41.023853    4110 fix.go:229] Guest: 2024-09-17 02:10:40.935930388 -0700 PDT Remote: 2024-09-17 02:10:40.965683 -0700 PDT m=+13.896006994 (delta=-29.752612ms)
	I0917 02:10:41.023870    4110 fix.go:200] guest clock delta is within tolerance: -29.752612ms
	I0917 02:10:41.023873    4110 start.go:83] releasing machines lock for "ha-857000", held for 13.506240986s
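
fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the result when the delta is within tolerance, as with the -29.75ms above. A sketch of that comparison; the one-second tolerance here is an assumption for illustration, not minikube's actual threshold:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        out := "1726564240.935930388" // guest `date +%s.%N` output from the log
        secs, err := strconv.ParseFloat(out, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := guest.Sub(time.Now())
        abs := delta
        if abs < 0 {
            abs = -abs
        }
        const tolerance = time.Second // assumed threshold
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, abs < tolerance)
    }
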
	I0917 02:10:41.023893    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024017    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:41.024124    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024416    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024496    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:10:41.024577    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:10:41.024607    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:41.024622    4110 ssh_runner.go:195] Run: cat /version.json
	I0917 02:10:41.024633    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:10:41.024692    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:41.024731    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:10:41.024799    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:41.024812    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:10:41.024882    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:41.024908    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:10:41.025002    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:41.025031    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:10:41.057444    4110 ssh_runner.go:195] Run: systemctl --version
	I0917 02:10:41.119261    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 02:10:41.123760    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:10:41.123809    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:10:41.136297    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:10:41.136307    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:10:41.136412    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:10:41.153182    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:10:41.162387    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:10:41.171363    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:10:41.171411    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:10:41.180339    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:10:41.189205    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:10:41.198331    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:10:41.207214    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:10:41.216288    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:10:41.225185    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:10:41.234170    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:10:41.243192    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:10:41.251363    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:10:41.259648    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:41.359254    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
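
The run of sed edits above rewrites /etc/containerd/config.toml in place: it pins sandbox_image to registry.k8s.io/pause:3.10, forces SystemdCgroup = false to match the cgroupfs driver chosen for this cluster, migrates io.containerd.runtime.v1.linux and runc.v1 names to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d. Each edit preserves leading whitespace with a capture group. A Go equivalent of the SystemdCgroup rewrite, as a sketch:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "  [plugins.\"io.containerd.grpc.v1.cri\"]\n    SystemdCgroup = true\n"
        // (?m) makes ^ and $ match per line, like sed; ${1} keeps the indentation.
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
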
	I0917 02:10:41.378053    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:10:41.378144    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:10:41.391608    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:10:41.406431    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:10:41.426598    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:10:41.437654    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:10:41.448507    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:10:41.470118    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:10:41.481632    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:10:41.496609    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:10:41.499690    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:10:41.507723    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:10:41.520894    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:10:41.633690    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:10:41.735063    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:10:41.735129    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:10:41.749181    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:41.842846    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:10:44.137188    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.294283491s)
	I0917 02:10:44.137256    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:10:44.147554    4110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:10:44.160480    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:10:44.170998    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:10:44.262329    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:10:44.355414    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:44.456404    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:10:44.470268    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:10:44.481488    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:44.585298    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:10:44.651024    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:10:44.651127    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:10:44.655468    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:10:44.655523    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:10:44.660816    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:10:44.685805    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
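
The "Will wait 60s for socket path" step above amounts to polling stat on /var/run/cri-dockerd.sock until the freshly restarted cri-docker.service exposes it or the deadline passes; here it appeared on the first check. A sketch of that wait loop (the 500ms poll interval is an assumption):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            if _, err := os.Stat("/var/run/cri-dockerd.sock"); err == nil {
                fmt.Println("socket ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for /var/run/cri-dockerd.sock")
    }
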
	I0917 02:10:44.685900    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:10:44.701620    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:10:44.762577    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:10:44.762643    4110 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:10:44.763055    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:10:44.767764    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
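
The bash one-liner above is the idempotent /etc/hosts update: strip any existing host.minikube.internal line, append the fresh mapping to the gateway IP, stage the result under /tmp, and copy it into place. The same logic as a local Go sketch (the temp path is illustrative):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale line ending in <tab>host.minikube.internal.
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.169.0.1\thost.minikube.internal")
        // Stage to a temp file first, mirroring the /tmp/h.$$ dance, then copy over.
        tmp := "/tmp/hosts.new"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("staged updated hosts file at", tmp)
    }
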
	I0917 02:10:44.778676    4110 kubeadm.go:883] updating cluster {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 02:10:44.778770    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:44.778845    4110 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:10:44.792490    4110 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:10:44.792502    4110 docker.go:615] Images already preloaded, skipping extraction
	I0917 02:10:44.792587    4110 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:10:44.806122    4110 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:10:44.806141    4110 cache_images.go:84] Images are preloaded, skipping loading
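
docker.go decides whether to extract the preload tarball by listing `docker images --format {{.Repository}}:{{.Tag}}` and checking that every expected image is already present; both passes above find the full v1.31.1 set, so extraction and image loading are skipped. A sketch of that presence check (expected list abbreviated from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        expected := []string{
            "registry.k8s.io/kube-apiserver:v1.31.1",
            "registry.k8s.io/etcd:3.5.15-0",
            "registry.k8s.io/pause:3.10",
        }
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        for _, img := range expected {
            if !have[img] {
                fmt.Println("missing, would extract preload:", img)
                return
            }
        }
        fmt.Println("images already preloaded, skipping extraction")
    }
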
	I0917 02:10:44.806152    4110 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0917 02:10:44.806226    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:10:44.806308    4110 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:10:44.838425    4110 cni.go:84] Creating CNI manager for ""
	I0917 02:10:44.838438    4110 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:10:44.838451    4110 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:10:44.838467    4110 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-857000 NodeName:ha-857000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:10:44.838548    4110 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-857000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
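
The generated kubeadm.yaml above is one multi-document stream: InitConfiguration and ClusterConfiguration for kubeadm itself, a KubeletConfiguration that zeroes the eviction thresholds and sets imageGCHighThresholdPercent to 100 (disk management is deliberately disabled, per the comment in the config), and a KubeProxyConfiguration whose zeroed conntrack timeouts keep kube-proxy from setting the corresponding nf_conntrack sysctls. A consumer can split the stream on document separators before dispatching on kind, e.g. this stdlib-only sketch:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        manifest := `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration`
        for _, doc := range strings.Split(manifest, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    fmt.Println("found document:", strings.TrimPrefix(line, "kind: "))
                }
            }
        }
    }
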
	
	I0917 02:10:44.838565    4110 kube-vip.go:115] generating kube-vip config ...
	I0917 02:10:44.838624    4110 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:10:44.852006    4110 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:10:44.852072    4110 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
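
The kube-vip static pod above implements the APIServerHAVIP from the cluster config: each control-plane node runs this manifest, leader election over the plndr-cp-lock lease (vip_leaderelection) decides which node ARPs for 192.169.0.254, and lb_enable, switched on by the "auto-enabling control-plane load-balancing" line, additionally balances port 8443 across the control planes. A quick reachability probe against that VIP, as an illustrative sketch:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Whichever control plane currently holds the plndr-cp-lock lease
        // should answer on the VIP and lb_port from the manifest.
        conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 2*time.Second)
        if err != nil {
            fmt.Println("VIP not answering:", err)
            return
        }
        conn.Close()
        fmt.Println("VIP reachable; a leader holds the address")
    }
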
	I0917 02:10:44.852126    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:10:44.861875    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:10:44.861926    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 02:10:44.870065    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0917 02:10:44.883323    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:10:44.896671    4110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0917 02:10:44.910190    4110 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:10:44.923776    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:10:44.926683    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:10:44.936751    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:10:45.031050    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:10:45.045803    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.5
	I0917 02:10:45.045815    4110 certs.go:194] generating shared ca certs ...
	I0917 02:10:45.045826    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.046013    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:10:45.046090    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:10:45.046101    4110 certs.go:256] generating profile certs ...
	I0917 02:10:45.046208    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:10:45.046290    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.34d356ea
	I0917 02:10:45.046357    4110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:10:45.046364    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:10:45.046385    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:10:45.046406    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:10:45.046424    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:10:45.046442    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:10:45.046474    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:10:45.046503    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:10:45.046520    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:10:45.046624    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:10:45.046679    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:10:45.046688    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:10:45.046749    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:10:45.046790    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:10:45.046829    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:10:45.046908    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:10:45.046945    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.046966    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.046984    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.047483    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:10:45.080356    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:10:45.112920    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:10:45.138450    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:10:45.175252    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:10:45.218044    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:10:45.251977    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:10:45.309085    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:10:45.353596    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:10:45.384476    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:10:45.404778    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:10:45.423525    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 02:10:45.437207    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:10:45.441704    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:10:45.450346    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.453899    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.453945    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:10:45.458361    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:10:45.466854    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:10:45.475379    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.478924    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.478963    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:10:45.483279    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:10:45.491638    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:10:45.500375    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.504070    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.504128    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:10:45.508583    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
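
The `openssl x509 -hash -noout` runs above print each certificate's subject-name hash: OpenSSL locates trust anchors in /etc/ssl/certs through <hash>.0 symlinks, which is why each installed PEM gets a link such as b5213941.0 for minikubeCA.pem. A sketch of that link step:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
        link := "/etc/ssl/certs/" + hash + ".0"
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(pem, link); err != nil {
                panic(err)
            }
        }
        fmt.Println("trust link:", link)
    }
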
	I0917 02:10:45.516977    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:10:45.520582    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:10:45.524889    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:10:45.529282    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:10:45.533668    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:10:45.538022    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:10:45.542262    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
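
Each `-checkend 86400` probe above makes openssl exit non-zero if the certificate expires within the next 24 hours, a cheap guard before reusing the existing control-plane certs across a restart. The same check in Go (path taken from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of `openssl x509 -checkend 86400`: fail inside the window.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 86400s; needs renewal")
            os.Exit(1)
        }
        fmt.Println("certificate valid past the check window")
    }
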
	I0917 02:10:45.546447    4110 kubeadm.go:392] StartCluster: {Name:ha-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:10:45.546579    4110 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:10:45.558935    4110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:10:45.566714    4110 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 02:10:45.566724    4110 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:10:45.566760    4110 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:10:45.574257    4110 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:10:45.574553    4110 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-857000" does not appear in /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:45.574638    4110 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1025/kubeconfig needs updating (will repair): [kubeconfig missing "ha-857000" cluster setting kubeconfig missing "ha-857000" context setting]
	I0917 02:10:45.574818    4110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.575437    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:45.575640    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:10:45.575954    4110 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 02:10:45.576155    4110 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:10:45.583535    4110 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0917 02:10:45.583548    4110 kubeadm.go:597] duration metric: took 16.820219ms to restartPrimaryControlPlane
	I0917 02:10:45.583553    4110 kubeadm.go:394] duration metric: took 37.114772ms to StartCluster
	I0917 02:10:45.583562    4110 settings.go:142] acquiring lock: {Name:mk5de70c670a23cf49fdbd19ddb5c1d2a9de9a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.583637    4110 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:10:45.584029    4110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:10:45.584244    4110 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:10:45.584257    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:10:45.584290    4110 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:10:45.584399    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:45.629290    4110 out.go:177] * Enabled addons: 
	I0917 02:10:45.650483    4110 addons.go:510] duration metric: took 66.114939ms for enable addons: enabled=[]
	I0917 02:10:45.650526    4110 start.go:246] waiting for cluster config update ...
	I0917 02:10:45.650541    4110 start.go:255] writing updated cluster config ...
	I0917 02:10:45.672110    4110 out.go:201] 
	I0917 02:10:45.693671    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:45.693812    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:45.716376    4110 out.go:177] * Starting "ha-857000-m02" control-plane node in "ha-857000" cluster
	I0917 02:10:45.758138    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:45.758205    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:10:45.758422    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:10:45.758440    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:10:45.758566    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:45.759523    4110 start.go:360] acquireMachinesLock for ha-857000-m02: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:10:45.759643    4110 start.go:364] duration metric: took 94.526µs to acquireMachinesLock for "ha-857000-m02"
	I0917 02:10:45.759684    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:10:45.759694    4110 fix.go:54] fixHost starting: m02
	I0917 02:10:45.760135    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:10:45.760170    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:10:45.769422    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52114
	I0917 02:10:45.769778    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:10:45.770120    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:10:45.770130    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:10:45.770332    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:10:45.770446    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:10:45.770540    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetState
	I0917 02:10:45.770620    4110 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:45.770696    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3976
	I0917 02:10:45.771617    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3976 missing from process table
	I0917 02:10:45.771641    4110 fix.go:112] recreateIfNeeded on ha-857000-m02: state=Stopped err=<nil>
	I0917 02:10:45.771648    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	W0917 02:10:45.771734    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:10:45.793214    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m02" ...
	I0917 02:10:45.835194    4110 main.go:141] libmachine: (ha-857000-m02) Calling .Start
	I0917 02:10:45.835422    4110 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:45.835478    4110 main.go:141] libmachine: (ha-857000-m02) minikube might have been shut down in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid
	I0917 02:10:45.836481    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3976 missing from process table
	I0917 02:10:45.836493    4110 main.go:141] libmachine: (ha-857000-m02) DBG | pid 3976 is in state "Stopped"
	I0917 02:10:45.836506    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid...
	I0917 02:10:45.836730    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Using UUID 1940a045-0b1b-4b28-842d-d4858a62cbd3
	I0917 02:10:45.862461    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Generated MAC 9a:95:4e:4b:65:fe
	I0917 02:10:45.862487    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:10:45.862599    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:45.862645    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1940a045-0b1b-4b28-842d-d4858a62cbd3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:10:45.862683    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1940a045-0b1b-4b28-842d-d4858a62cbd3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:10:45.862720    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1940a045-0b1b-4b28-842d-d4858a62cbd3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/ha-857000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:10:45.862741    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:10:45.864138    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 DEBUG: hyperkit: Pid is 4131
	I0917 02:10:45.864563    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Attempt 0
	I0917 02:10:45.864573    4110 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:10:45.864635    4110 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 4131
	I0917 02:10:45.866426    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Searching for 9a:95:4e:4b:65:fe in /var/db/dhcpd_leases ...
	I0917 02:10:45.866511    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:10:45.866527    4110 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:10:45.866546    4110 main.go:141] libmachine: (ha-857000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea9817}
	I0917 02:10:45.866556    4110 main.go:141] libmachine: (ha-857000-m02) DBG | Found match: 9a:95:4e:4b:65:fe
	I0917 02:10:45.866585    4110 main.go:141] libmachine: (ha-857000-m02) DBG | IP: 192.169.0.6
	I0917 02:10:45.866617    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetConfigRaw
	I0917 02:10:45.867379    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:10:45.867624    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:10:45.868172    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:10:45.868192    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:10:45.868319    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:10:45.868433    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:10:45.868540    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:10:45.868629    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:10:45.868743    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:10:45.868892    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:10:45.869038    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:10:45.869047    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:10:45.871979    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:10:45.880237    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:10:45.881261    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:45.881280    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:45.881317    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:45.881331    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:46.263104    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:10:46.263119    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:10:46.377844    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:10:46.377864    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:10:46.377874    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:10:46.377890    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:10:46.378727    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:10:46.378736    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:10:51.977750    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:51 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:10:51.977833    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:51 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:10:51.977841    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:51 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:10:52.002295    4110 main.go:141] libmachine: (ha-857000-m02) DBG | 2024/09/17 02:10:52 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:11:20.931384    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:11:20.931398    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:11:20.931549    4110 buildroot.go:166] provisioning hostname "ha-857000-m02"
	I0917 02:11:20.931560    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:11:20.931664    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:20.931762    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:20.931855    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.931937    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.932033    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:20.932169    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:20.932351    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:20.932359    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m02 && echo "ha-857000-m02" | sudo tee /etc/hostname
	I0917 02:11:20.993183    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m02
	
	I0917 02:11:20.993198    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:20.993326    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:20.993440    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.993526    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:20.993618    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:20.993763    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:20.993914    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:20.993925    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:11:21.050925    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:11:21.050951    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:11:21.050960    4110 buildroot.go:174] setting up certificates
	I0917 02:11:21.050966    4110 provision.go:84] configureAuth start
	I0917 02:11:21.050972    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetMachineName
	I0917 02:11:21.051109    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:11:21.051192    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.051304    4110 provision.go:143] copyHostCerts
	I0917 02:11:21.051330    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:21.051388    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:11:21.051394    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:21.051551    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:11:21.051732    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:21.051778    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:11:21.051784    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:21.051862    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:11:21.051999    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:21.052037    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:11:21.052041    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:21.052127    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:11:21.052261    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m02 san=[127.0.0.1 192.169.0.6 ha-857000-m02 localhost minikube]
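The SAN list in the line above (127.0.0.1, the node IP, and the host names) ends up baked into the server certificate signed with the shared CA. A self-contained Go sketch of SAN-bearing certificate generation with crypto/x509; it self-signs for brevity rather than signing with ca.pem/ca-key.pem as the real flow does, so it is illustrative only.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		DNSNames:    []string{"ha-857000-m02", "localhost", "minikube"},
	}
	// Self-signed (template == parent); the real step signs with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}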
	I0917 02:11:21.131473    4110 provision.go:177] copyRemoteCerts
	I0917 02:11:21.131534    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:11:21.131551    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.131683    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.131772    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.131866    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.131988    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:21.165457    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:11:21.165530    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:11:21.185353    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:11:21.185424    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:11:21.204885    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:11:21.204944    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:11:21.224555    4110 provision.go:87] duration metric: took 173.578725ms to configureAuth
	I0917 02:11:21.224572    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:11:21.224752    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:21.224765    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:21.224898    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.224985    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.225071    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.225151    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.225226    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.225334    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:21.225453    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:21.225471    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:11:21.276594    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:11:21.276610    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:11:21.276682    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:11:21.276692    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.276824    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.276911    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.276982    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.277068    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.277206    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:21.277343    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:21.277390    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:11:21.338440    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:11:21.338457    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:21.338602    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:21.338693    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.338786    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:21.338878    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:21.339018    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:21.339165    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:21.339180    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:11:23.000541    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:11:23.000557    4110 machine.go:96] duration metric: took 37.131734761s to provisionDockerMachine
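The diff-or-replace one-liner above only swaps in docker.service.new and restarts Docker when the rendered unit actually changed. A hedged Go sketch of the same compare-and-swap idea; the real step shells out to diff/mv/systemctl over SSH, and the paths here simply echo the ones in the log.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged atomically replaces dst with newPath when their
// contents differ, then reloads and restarts the unit; it mirrors the
// "diff || { mv; daemon-reload; enable; restart; }" shell step above.
func installIfChanged(dst, newPath, unit string) error {
	oldData, err := os.ReadFile(dst) // a missing dst counts as "changed"
	newData, nerr := os.ReadFile(newPath)
	if nerr != nil {
		return nerr
	}
	if err == nil && bytes.Equal(oldData, newData) {
		return os.Remove(newPath) // unit unchanged, nothing to do
	}
	if err := os.Rename(newPath, dst); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", unit}, {"restart", unit},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := installIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new",
		"docker")
	fmt.Println("result:", err)
}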
	I0917 02:11:23.000565    4110 start.go:293] postStartSetup for "ha-857000-m02" (driver="hyperkit")
	I0917 02:11:23.000572    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:11:23.000581    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.000771    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:11:23.000784    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.000877    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.000970    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.001060    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.001151    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:23.034070    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:11:23.037044    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:11:23.037054    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:11:23.037149    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:11:23.037326    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:11:23.037333    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:11:23.037542    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:11:23.045540    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:11:23.064134    4110 start.go:296] duration metric: took 63.560241ms for postStartSetup
	I0917 02:11:23.064153    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.064355    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:11:23.064367    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.064443    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.064537    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.064625    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.064699    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:23.096648    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:11:23.096719    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:11:23.150750    4110 fix.go:56] duration metric: took 37.39040777s for fixHost
	I0917 02:11:23.150781    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.150933    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.151043    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.151139    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.151225    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.151344    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:23.151480    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0917 02:11:23.151487    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:11:23.205108    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564282.931256187
	
	I0917 02:11:23.205121    4110 fix.go:216] guest clock: 1726564282.931256187
	I0917 02:11:23.205126    4110 fix.go:229] Guest: 2024-09-17 02:11:22.931256187 -0700 PDT Remote: 2024-09-17 02:11:23.150765 -0700 PDT m=+56.080359699 (delta=-219.508813ms)
	I0917 02:11:23.205134    4110 fix.go:200] guest clock delta is within tolerance: -219.508813ms
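The tolerance check above parses the guest's `date +%s.%N` output and compares it to the host clock, yielding the -219.508813ms delta. A minimal Go sketch of that delta computation; the one-second tolerance here is an assumption for illustration, not the value minikube uses.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseEpochNanos turns a "seconds.nanoseconds" string such as
// "1726564282.931256187" into a time.Time.
func parseEpochNanos(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad to 9 digits so ".9" means 900ms, not 9ns.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpochNanos("1726564282.931256187")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now()) // negative means the guest clock is behind
	const tolerance = time.Second  // assumed threshold, for illustration
	fmt.Printf("delta=%v within=%v\n", delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}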
	I0917 02:11:23.205138    4110 start.go:83] releasing machines lock for "ha-857000-m02", held for 37.444836088s
	I0917 02:11:23.205153    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.205283    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:11:23.226836    4110 out.go:177] * Found network options:
	I0917 02:11:23.247780    4110 out.go:177]   - NO_PROXY=192.169.0.5
	W0917 02:11:23.268466    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:23.268508    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.269341    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.269597    4110 main.go:141] libmachine: (ha-857000-m02) Calling .DriverName
	I0917 02:11:23.269778    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 02:11:23.269794    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:23.269828    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.269896    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:11:23.269915    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHHostname
	I0917 02:11:23.270129    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.270153    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHPort
	I0917 02:11:23.270351    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.270407    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHKeyPath
	I0917 02:11:23.270526    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.270571    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetSSHUsername
	I0917 02:11:23.270741    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	I0917 02:11:23.270760    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m02/id_rsa Username:docker}
	W0917 02:11:23.355936    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:11:23.356046    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:11:23.371785    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:11:23.371805    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:23.371897    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:23.389343    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:11:23.397507    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:11:23.405706    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:11:23.405760    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:11:23.413954    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:23.422064    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:11:23.430077    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:23.438247    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:11:23.446615    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:11:23.455025    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:11:23.463904    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:11:23.472877    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:11:23.480886    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:11:23.488979    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:23.586431    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
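The run of sed edits above rewrites /etc/containerd/config.toml in place to force the cgroupfs driver before containerd is restarted. A hedged Go equivalent of just the SystemdCgroup substitution, using the same regex as the logged sed command; it is a sketch of the edit, not minikube's code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// forceCgroupfs flips any "SystemdCgroup = ..." line to false,
// preserving indentation, the same substitution the logged sed performs.
func forceCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := forceCgroupfs("/etc/containerd/config.toml"); err != nil {
		fmt.Println(err)
	}
}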
	I0917 02:11:23.605512    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:23.605590    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:11:23.619031    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:23.632481    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:11:23.650301    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:23.661034    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:23.671499    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:11:23.693809    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:23.704324    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:23.719425    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:11:23.722279    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:11:23.729409    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:11:23.743121    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:11:23.848749    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:11:23.947630    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:11:23.947661    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:11:23.965207    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:24.060164    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:11:26.333778    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.273556023s)
	I0917 02:11:26.333847    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:11:26.345198    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:26.355965    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:11:26.461793    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:11:26.556361    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:26.674366    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:11:26.687753    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:26.697698    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:26.797118    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:11:26.861306    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:11:26.861392    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:11:26.865857    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:11:26.865915    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:11:26.869732    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:11:26.894886    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:11:26.894999    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:26.911893    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:26.950833    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:11:26.972458    4110 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 02:11:26.993284    4110 main.go:141] libmachine: (ha-857000-m02) Calling .GetIP
	I0917 02:11:26.993711    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:11:26.998329    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:11:27.008512    4110 mustload.go:65] Loading cluster: ha-857000
	I0917 02:11:27.008684    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:27.008920    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:11:27.008943    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:11:27.017607    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52136
	I0917 02:11:27.017941    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:11:27.018292    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:11:27.018310    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:11:27.018503    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:11:27.018620    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:11:27.018699    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:27.018771    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:11:27.019715    4110 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:11:27.019989    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:11:27.020015    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:11:27.028562    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52138
	I0917 02:11:27.028902    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:11:27.029241    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:11:27.029257    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:11:27.029461    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:11:27.029566    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:11:27.029665    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.6
	I0917 02:11:27.029672    4110 certs.go:194] generating shared ca certs ...
	I0917 02:11:27.029680    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:11:27.029857    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:11:27.029930    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:11:27.029938    4110 certs.go:256] generating profile certs ...
	I0917 02:11:27.030058    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:11:27.030140    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.d3e75930
	I0917 02:11:27.030214    4110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:11:27.030221    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:11:27.030242    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:11:27.030266    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:11:27.030285    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:11:27.030303    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:11:27.030337    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:11:27.030366    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:11:27.030389    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:11:27.030486    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:11:27.030540    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:11:27.030549    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:11:27.030587    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:11:27.030621    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:11:27.030651    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:11:27.030716    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:11:27.030753    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.030774    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.030792    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.030816    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:11:27.030911    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:11:27.031000    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:11:27.031078    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:11:27.031162    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:11:27.058778    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 02:11:27.062313    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 02:11:27.070939    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 02:11:27.074280    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 02:11:27.083003    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 02:11:27.086057    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 02:11:27.094554    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 02:11:27.097659    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 02:11:27.106657    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 02:11:27.109894    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 02:11:27.118370    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 02:11:27.121478    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 02:11:27.130386    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:11:27.150256    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:11:27.169526    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:11:27.188769    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:11:27.207966    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:11:27.227067    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:11:27.246289    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:11:27.265271    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:11:27.284669    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:11:27.303761    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:11:27.323113    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:11:27.342331    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 02:11:27.355765    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 02:11:27.369277    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 02:11:27.382837    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 02:11:27.396474    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 02:11:27.410313    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 02:11:27.423731    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 02:11:27.437366    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:11:27.441447    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:11:27.450619    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.453941    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.453997    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:11:27.458171    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:11:27.467199    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:11:27.476144    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.479431    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.479473    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:11:27.483603    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:11:27.492580    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:11:27.501517    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.504871    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.504915    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:11:27.509027    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:11:27.517892    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:11:27.521155    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:11:27.525378    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:11:27.529633    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:11:27.533810    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:11:27.538003    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:11:27.542137    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
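Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The same check expressed in Go with crypto/x509 (path borrowed from one of the logged commands):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}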
	I0917 02:11:27.546288    4110 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1 docker true true} ...
	I0917 02:11:27.546336    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:11:27.546350    4110 kube-vip.go:115] generating kube-vip config ...
	I0917 02:11:27.546384    4110 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:11:27.558948    4110 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:11:27.558990    4110 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
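
The kube-vip.go:115/137 lines above generate this static-pod manifest from the cluster config: the VIP (192.169.0.254), the API server port, and the lb_enable flag (auto-enabled at kube-vip.go:167) vary per cluster. A sketch of that generation step, assuming a text/template-based approach; the template text is abbreviated and illustrative, not the full manifest:

    package main

    import (
        "os"
        "text/template"
    )

    // vipTmpl is an abbreviated, illustrative stand-in for the manifest
    // printed above; only the fields that vary per cluster are templated.
    var vipTmpl = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - args: ["manager"]
        env:
        - {name: port, value: "{{.Port}}"}
        - {name: address, value: "{{.VIP}}"}
        - {name: lb_enable, value: "{{.LBEnabled}}"}
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        name: kube-vip
    `))

    func main() {
        _ = vipTmpl.Execute(os.Stdout, struct {
            VIP       string
            Port      int
            LBEnabled bool
        }{VIP: "192.169.0.254", Port: 8443, LBEnabled: true})
    }
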
	I0917 02:11:27.559048    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:11:27.568292    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:11:27.568351    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 02:11:27.577686    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 02:11:27.591394    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:11:27.604835    4110 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:11:27.618390    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:11:27.621271    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
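
The one-liner above makes the hosts entry idempotent: strip any existing control-plane.minikube.internal line, append the current VIP, and copy the result back over /etc/hosts. The same transformation in Go, written as a pure function for clarity:

    package main

    import "strings"

    // upsertHostsEntry drops any existing control-plane.minikube.internal
    // line and appends one pointing at ip, matching the shell pipeline above.
    func upsertHostsEntry(hosts, ip string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
        return strings.Join(kept, "\n") + "\n"
    }
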
	I0917 02:11:27.630851    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:27.729065    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:11:27.743762    4110 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:11:27.743972    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:27.765105    4110 out.go:177] * Verifying Kubernetes components...
	I0917 02:11:27.805899    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:27.933521    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:11:27.948089    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:11:27.948282    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 02:11:27.948321    4110 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 02:11:27.948495    4110 node_ready.go:35] waiting up to 6m0s for node "ha-857000-m02" to be "Ready" ...
	I0917 02:11:27.948579    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:27.948584    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:27.948591    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:27.948595    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:28.948736    4110 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0917 02:11:28.948861    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:28.948870    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:28.948878    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:28.948882    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.256443    4110 round_trippers.go:574] Response Status: 200 OK in 7307 milliseconds
	I0917 02:11:36.257038    4110 node_ready.go:49] node "ha-857000-m02" has status "Ready":"True"
	I0917 02:11:36.257051    4110 node_ready.go:38] duration metric: took 8.308394835s for node "ha-857000-m02" to be "Ready" ...
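
The node_ready.go wait loop above polls GET /api/v1/nodes/<name> until the NodeReady condition reports True, which here took 8.3s after the kubelet start. The equivalent predicate expressed with client-go (standard k8s.io/client-go APIs; clientset construction omitted):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the named node has condition Ready=True,
    // the same predicate the wait loop above is polling for.
    func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
        node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
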
	I0917 02:11:36.257061    4110 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0917 02:11:36.257098    4110 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 02:11:36.257107    4110 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 02:11:36.257147    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:36.257152    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.257158    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.257164    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.271996    4110 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0917 02:11:36.280676    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.280736    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:11:36.280742    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.280752    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.280756    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.307985    4110 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0917 02:11:36.308476    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.308484    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.308491    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.308501    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.312984    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:36.313392    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.313402    4110 pod_ready.go:82] duration metric: took 32.709315ms for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.313409    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.313452    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nl5j5
	I0917 02:11:36.313457    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.313463    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.313468    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.319771    4110 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 02:11:36.320384    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.320393    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.320400    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.320403    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.322816    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:36.323378    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.323388    4110 pod_ready.go:82] duration metric: took 9.97387ms for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.323395    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.323435    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000
	I0917 02:11:36.323440    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.323446    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.323450    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.327486    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:36.328047    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.328054    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.328060    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.328063    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.331571    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:36.332110    4110 pod_ready.go:93] pod "etcd-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.332121    4110 pod_ready.go:82] duration metric: took 8.720083ms for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.332128    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.332168    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m02
	I0917 02:11:36.332173    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.332179    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.332184    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.336324    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:36.336846    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:36.336854    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.336860    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.336864    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.340608    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:36.341048    4110 pod_ready.go:93] pod "etcd-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.341057    4110 pod_ready.go:82] duration metric: took 8.92351ms for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.341064    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.341104    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:11:36.341110    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.341116    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.341121    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.343462    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:36.458248    4110 request.go:632] Waited for 114.333049ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:36.458307    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:36.458312    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.458318    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.458326    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.466021    4110 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 02:11:36.466526    4110 pod_ready.go:93] pod "etcd-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.466536    4110 pod_ready.go:82] duration metric: took 125.46489ms for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
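
The request.go:632 "Waited for ... due to client-side throttling, not priority and fairness" lines that begin here are client-go's token-bucket rate limiter pacing these back-to-back GETs; with the default QPS of 5 and burst of 10 they are expected during a sweep like this and are not an apiserver delay. Raising the limits on the rest.Config is the usual remedy when a client legitimately needs this request rate; a sketch with illustrative values and a placeholder kubeconfig path:

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Path is illustrative; any kubeconfig works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        // client-go defaults to QPS=5, Burst=10, which is what produces the
        // "Waited for ... due to client-side throttling" lines above.
        // Raising them (illustrative values) removes the client-side waits.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
    }
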
	I0917 02:11:36.466548    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.657514    4110 request.go:632] Waited for 190.921312ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:11:36.657567    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:11:36.657574    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.657579    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.657584    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.659804    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:36.857671    4110 request.go:632] Waited for 197.395211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.857701    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:36.857705    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:36.857711    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:36.857715    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:36.861065    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:36.861653    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:36.861669    4110 pod_ready.go:82] duration metric: took 395.104039ms for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:36.861677    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.057332    4110 request.go:632] Waited for 195.603008ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:11:37.057382    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:11:37.057387    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.057393    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.057398    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.060216    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:37.258671    4110 request.go:632] Waited for 197.954534ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:37.258706    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:37.258713    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.258721    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.258727    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.267718    4110 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 02:11:37.268069    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:37.268082    4110 pod_ready.go:82] duration metric: took 406.392892ms for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.268090    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.457925    4110 request.go:632] Waited for 189.791882ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:11:37.457975    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:11:37.457980    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.457987    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.457992    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.461663    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:37.658806    4110 request.go:632] Waited for 196.487027ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:37.658861    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:37.658867    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.658874    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.658878    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.661429    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:37.661888    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:37.661897    4110 pod_ready.go:82] duration metric: took 393.794602ms for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.661905    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:37.857414    4110 request.go:632] Waited for 195.469923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:11:37.857468    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:11:37.857474    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:37.857481    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:37.857486    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:37.860019    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.057880    4110 request.go:632] Waited for 197.333642ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:38.057910    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:38.057915    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.057922    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.057927    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.060540    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.061091    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:38.061101    4110 pod_ready.go:82] duration metric: took 399.184022ms for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.061109    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.257757    4110 request.go:632] Waited for 196.608954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:11:38.257857    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:11:38.257871    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.257877    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.257882    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.259904    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.458082    4110 request.go:632] Waited for 197.709678ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:38.458138    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:38.458147    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.458154    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.458158    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.460347    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.460715    4110 pod_ready.go:98] node "ha-857000-m02" hosting pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:38.460726    4110 pod_ready.go:82] duration metric: took 399.604676ms for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	E0917 02:11:38.460732    4110 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-857000-m02" hosting pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:38.460739    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.658188    4110 request.go:632] Waited for 197.403717ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:11:38.658255    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:11:38.658261    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.658267    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.658271    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.660934    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:38.857786    4110 request.go:632] Waited for 196.168284ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:38.857841    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:38.857851    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:38.857863    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:38.857873    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:38.861470    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:38.861751    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:38.861759    4110 pod_ready.go:82] duration metric: took 401.003253ms for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:38.861766    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.057800    4110 request.go:632] Waited for 195.986319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:11:39.057882    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:11:39.057893    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.057904    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.057912    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.061639    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:39.257697    4110 request.go:632] Waited for 195.312452ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:11:39.257726    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:11:39.257731    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.257737    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.257741    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.260209    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:39.260462    4110 pod_ready.go:93] pod "kube-proxy-528ht" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:39.260471    4110 pod_ready.go:82] duration metric: took 398.692905ms for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.260478    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.459321    4110 request.go:632] Waited for 198.788481ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:11:39.459387    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:11:39.459394    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.459411    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.459422    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.461885    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:39.657441    4110 request.go:632] Waited for 195.121107ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:39.657541    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:39.657551    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.657579    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.657585    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.661441    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:39.661929    4110 pod_ready.go:93] pod "kube-proxy-g9wxm" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:39.661942    4110 pod_ready.go:82] duration metric: took 401.451734ms for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.661951    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:39.857721    4110 request.go:632] Waited for 195.727193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:11:39.857785    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:11:39.857791    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:39.857797    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:39.857802    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:39.859663    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:11:40.058574    4110 request.go:632] Waited for 198.443343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.058668    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.058679    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.058690    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.058699    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.062499    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:40.063124    4110 pod_ready.go:93] pod "kube-proxy-vskbj" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:40.063133    4110 pod_ready.go:82] duration metric: took 401.170349ms for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.063140    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.257873    4110 request.go:632] Waited for 194.653928ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:11:40.257927    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:11:40.257937    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.257948    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.257956    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.262255    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:11:40.458287    4110 request.go:632] Waited for 195.380222ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:40.458411    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:40.458421    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.458432    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.458443    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.462171    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:40.462629    4110 pod_ready.go:98] node "ha-857000-m02" hosting pod "kube-proxy-zrqvr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:40.462643    4110 pod_ready.go:82] duration metric: took 399.490798ms for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	E0917 02:11:40.462673    4110 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-857000-m02" hosting pod "kube-proxy-zrqvr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:40.462687    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.658101    4110 request.go:632] Waited for 195.359912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:11:40.658147    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:11:40.658152    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.658159    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.658164    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.660407    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:40.858455    4110 request.go:632] Waited for 197.559018ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.858564    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:11:40.858583    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:40.858595    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:40.858601    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:40.861876    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:40.862327    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:40.862336    4110 pod_ready.go:82] duration metric: took 399.635382ms for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:40.862343    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:41.057949    4110 request.go:632] Waited for 195.512959ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:11:41.058021    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:11:41.058032    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.058044    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.058051    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.061708    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:41.257802    4110 request.go:632] Waited for 195.475163ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:41.257884    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:11:41.257895    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.257906    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.257913    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.261190    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:41.261502    4110 pod_ready.go:98] node "ha-857000-m02" hosting pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:41.261513    4110 pod_ready.go:82] duration metric: took 399.156939ms for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	E0917 02:11:41.261527    4110 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-857000-m02" hosting pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-857000-m02" has status "Ready":"False"
	I0917 02:11:41.261532    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:41.458981    4110 request.go:632] Waited for 197.407496ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:11:41.459061    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:11:41.459070    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.459078    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.459084    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.461880    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:41.657846    4110 request.go:632] Waited for 195.542216ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:41.657906    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:11:41.657913    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.657921    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.657934    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.660204    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:41.660601    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:11:41.660610    4110 pod_ready.go:82] duration metric: took 399.066544ms for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:11:41.660617    4110 pod_ready.go:39] duration metric: took 5.403454072s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:11:41.660636    4110 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:11:41.660697    4110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:11:41.672821    4110 api_server.go:72] duration metric: took 13.928795458s to wait for apiserver process to appear ...
	I0917 02:11:41.672831    4110 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:11:41.672845    4110 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 02:11:41.683603    4110 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0917 02:11:41.683654    4110 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 02:11:41.683660    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.683666    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.683670    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.684276    4110 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:11:41.684340    4110 api_server.go:141] control plane version: v1.31.1
	I0917 02:11:41.684350    4110 api_server.go:131] duration metric: took 11.515194ms to wait for apiserver health ...
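
The api_server.go:253 check above probes /healthz directly over TLS before trusting the control plane; a 200 response with body "ok", as logged, is the healthy signal. A stripped-down version of that probe, assuming an *http.Client already configured to trust the cluster CA:

    package main

    import (
        "io"
        "net/http"
    )

    // apiserverHealthy GETs <base>/healthz and treats HTTP 200 with body "ok"
    // as healthy, mirroring the probe logged above. client must already trust
    // the cluster CA via its TLS config.
    func apiserverHealthy(client *http.Client, base string) (bool, error) {
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }
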
	I0917 02:11:41.684356    4110 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 02:11:41.857675    4110 request.go:632] Waited for 173.274042ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:41.857793    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:41.857803    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:41.857823    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:41.857833    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:41.863157    4110 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:11:41.868330    4110 system_pods.go:59] 26 kube-system pods found
	I0917 02:11:41.868348    4110 system_pods.go:61] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running
	I0917 02:11:41.868352    4110 system_pods.go:61] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running
	I0917 02:11:41.868360    4110 system_pods.go:61] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:11:41.868366    4110 system_pods.go:61] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 02:11:41.868371    4110 system_pods.go:61] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:11:41.868377    4110 system_pods.go:61] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:11:41.868392    4110 system_pods.go:61] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running
	I0917 02:11:41.868398    4110 system_pods.go:61] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:11:41.868402    4110 system_pods.go:61] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:11:41.868406    4110 system_pods.go:61] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:11:41.868424    4110 system_pods.go:61] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 02:11:41.868430    4110 system_pods.go:61] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:11:41.868434    4110 system_pods.go:61] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:11:41.868438    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 02:11:41.868442    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:11:41.868445    4110 system_pods.go:61] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:11:41.868448    4110 system_pods.go:61] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:11:41.868450    4110 system_pods.go:61] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running
	I0917 02:11:41.868454    4110 system_pods.go:61] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:11:41.868456    4110 system_pods.go:61] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:11:41.868468    4110 system_pods.go:61] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 02:11:41.868473    4110 system_pods.go:61] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:11:41.868484    4110 system_pods.go:61] "kube-vip-ha-857000" [84b805d8-9a8f-4c6f-b18f-76c98ca4776c] Running
	I0917 02:11:41.868488    4110 system_pods.go:61] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:11:41.868490    4110 system_pods.go:61] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:11:41.868493    4110 system_pods.go:61] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:11:41.868498    4110 system_pods.go:74] duration metric: took 184.134673ms to wait for pod list to return data ...
	I0917 02:11:41.868509    4110 default_sa.go:34] waiting for default service account to be created ...
	I0917 02:11:42.057457    4110 request.go:632] Waited for 188.887232ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:11:42.057501    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:11:42.057507    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:42.057512    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:42.057516    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:42.060122    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:42.060299    4110 default_sa.go:45] found service account: "default"
	I0917 02:11:42.060314    4110 default_sa.go:55] duration metric: took 191.792113ms for default service account to be created ...
	I0917 02:11:42.060320    4110 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 02:11:42.257458    4110 request.go:632] Waited for 197.098839ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:42.257490    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:11:42.257495    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:42.257501    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:42.257506    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:42.261392    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:11:42.267316    4110 system_pods.go:86] 26 kube-system pods found
	I0917 02:11:42.267336    4110 system_pods.go:89] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running
	I0917 02:11:42.267340    4110 system_pods.go:89] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running
	I0917 02:11:42.267343    4110 system_pods.go:89] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:11:42.267356    4110 system_pods.go:89] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 02:11:42.267362    4110 system_pods.go:89] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:11:42.267366    4110 system_pods.go:89] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:11:42.267369    4110 system_pods.go:89] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running
	I0917 02:11:42.267372    4110 system_pods.go:89] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:11:42.267377    4110 system_pods.go:89] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:11:42.267380    4110 system_pods.go:89] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:11:42.267385    4110 system_pods.go:89] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 02:11:42.267389    4110 system_pods.go:89] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:11:42.267392    4110 system_pods.go:89] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:11:42.267398    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 02:11:42.267402    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:11:42.267405    4110 system_pods.go:89] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:11:42.267408    4110 system_pods.go:89] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:11:42.267411    4110 system_pods.go:89] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running
	I0917 02:11:42.267415    4110 system_pods.go:89] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:11:42.267419    4110 system_pods.go:89] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:11:42.267423    4110 system_pods.go:89] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 02:11:42.267427    4110 system_pods.go:89] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:11:42.267436    4110 system_pods.go:89] "kube-vip-ha-857000" [84b805d8-9a8f-4c6f-b18f-76c98ca4776c] Running
	I0917 02:11:42.267438    4110 system_pods.go:89] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:11:42.267441    4110 system_pods.go:89] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:11:42.267444    4110 system_pods.go:89] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:11:42.267448    4110 system_pods.go:126] duration metric: took 207.120728ms to wait for k8s-apps to be running ...
	I0917 02:11:42.267459    4110 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:11:42.267525    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:11:42.280323    4110 system_svc.go:56] duration metric: took 12.855514ms WaitForService to wait for kubelet
	I0917 02:11:42.280342    4110 kubeadm.go:582] duration metric: took 14.536306226s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:11:42.280356    4110 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:11:42.458901    4110 request.go:632] Waited for 178.497588ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 02:11:42.458965    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 02:11:42.458970    4110 round_trippers.go:469] Request Headers:
	I0917 02:11:42.458975    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:11:42.458980    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:11:42.461607    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:11:42.462345    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462358    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462367    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462370    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462374    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462377    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462380    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:11:42.462383    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:11:42.462386    4110 node_conditions.go:105] duration metric: took 182.022805ms to run NodePressure ...
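
The node_conditions.go figures above come straight from the typed Node objects in the /api/v1/nodes response: four nodes, each reporting 17734596Ki of ephemeral storage and 2 CPUs. A sketch of how those values are read with client-go's resource-quantity helpers:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printCapacity lists every node's ephemeral-storage and CPU capacity,
    // the two figures the NodePressure check above logs per node.
    func printCapacity(ctx context.Context, c kubernetes.Interface) error {
        nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: ephemeral=%s cpu=%d\n", n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().Value())
        }
        return nil
    }
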
	I0917 02:11:42.462394    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:11:42.462412    4110 start.go:255] writing updated cluster config ...
	I0917 02:11:42.484336    4110 out.go:201] 
	I0917 02:11:42.505774    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:42.505869    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:11:42.527331    4110 out.go:177] * Starting "ha-857000-m03" control-plane node in "ha-857000" cluster
	I0917 02:11:42.569515    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:11:42.569551    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:11:42.569751    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:11:42.569769    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:11:42.569891    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:11:42.570622    4110 start.go:360] acquireMachinesLock for ha-857000-m03: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:11:42.570733    4110 start.go:364] duration metric: took 89.66µs to acquireMachinesLock for "ha-857000-m03"
	I0917 02:11:42.570758    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:11:42.570766    4110 fix.go:54] fixHost starting: m03
	I0917 02:11:42.571203    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:11:42.571238    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:11:42.581037    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52144
	I0917 02:11:42.581469    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:11:42.581811    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:11:42.581822    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:11:42.582051    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:11:42.582209    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:42.582294    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetState
	I0917 02:11:42.582428    4110 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:42.582545    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid from json: 3442
	I0917 02:11:42.583498    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid 3442 missing from process table
	I0917 02:11:42.583556    4110 fix.go:112] recreateIfNeeded on ha-857000-m03: state=Stopped err=<nil>
	I0917 02:11:42.583568    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	W0917 02:11:42.583655    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:11:42.604438    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m03" ...
	I0917 02:11:42.678579    4110 main.go:141] libmachine: (ha-857000-m03) Calling .Start
	I0917 02:11:42.678864    4110 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:42.678945    4110 main.go:141] libmachine: (ha-857000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid
	I0917 02:11:42.680796    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid 3442 missing from process table
	I0917 02:11:42.680811    4110 main.go:141] libmachine: (ha-857000-m03) DBG | pid 3442 is in state "Stopped"
	I0917 02:11:42.680856    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid...
	I0917 02:11:42.681059    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Using UUID 3d8fdba9-dbf7-47ea-a80b-a24a99cad96e
	I0917 02:11:42.708058    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Generated MAC 16:4d:1d:5e:91:c8
	I0917 02:11:42.708080    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:11:42.708229    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3d8fdba9-dbf7-47ea-a80b-a24a99cad96e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000398c90)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:11:42.708256    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3d8fdba9-dbf7-47ea-a80b-a24a99cad96e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000398c90)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:11:42.708317    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3d8fdba9-dbf7-47ea-a80b-a24a99cad96e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/ha-857000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:11:42.708369    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3d8fdba9-dbf7-47ea-a80b-a24a99cad96e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/ha-857000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:11:42.708386    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:11:42.710198    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 DEBUG: hyperkit: Pid is 4146
	I0917 02:11:42.710768    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Attempt 0
	I0917 02:11:42.710795    4110 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:11:42.710847    4110 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid from json: 4146
	I0917 02:11:42.712907    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Searching for 16:4d:1d:5e:91:c8 in /var/db/dhcpd_leases ...
	I0917 02:11:42.712978    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:11:42.713009    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:11:42.713035    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:11:42.713060    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:11:42.713079    4110 main.go:141] libmachine: (ha-857000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9724}
	I0917 02:11:42.713098    4110 main.go:141] libmachine: (ha-857000-m03) DBG | Found match: 16:4d:1d:5e:91:c8
	I0917 02:11:42.713110    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetConfigRaw
	I0917 02:11:42.713129    4110 main.go:141] libmachine: (ha-857000-m03) DBG | IP: 192.169.0.7
	I0917 02:11:42.713812    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
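[editor note] Hyperkit exposes no API for a VM's address, so the DBG lines above show the driver recovering the IP by matching the VM's generated MAC against macOS bootpd's lease database. A rough standalone equivalent, assuming the usual /var/db/dhcpd_leases record layout (brace-delimited records with name=, ip_address=, hw_address=1,<mac> fields; the MAC below is taken from this log and substring matching is a simplification of minikube's lease parsing):

    MAC="16:4d:1d:5e:91:c8"
    awk -v mac="$MAC" '
      /^\{/                           { ip = "" }           # new lease record: reset
      /ip_address=/                   { sub(/.*=/, ""); ip = $0 }
      /hw_address=/ && index($0, mac) { print ip; exit }    # MAC matched: emit its IP
    ' /var/db/dhcpd_leases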
	I0917 02:11:42.714067    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:11:42.714634    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:11:42.714648    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:42.714804    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:42.714912    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:42.715030    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:42.715172    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:42.715275    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:42.715462    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:42.715719    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:42.715729    4110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:11:42.719370    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:11:42.729567    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:11:42.730522    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:11:42.730552    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:11:42.730564    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:11:42.730573    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:11:43.130217    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:11:43.130237    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:11:43.246057    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:11:43.246080    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:11:43.246089    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:11:43.246096    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:11:43.246900    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:11:43.246909    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:11:48.954281    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:11:48.954379    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:11:48.954390    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:11:48.977816    4110 main.go:141] libmachine: (ha-857000-m03) DBG | 2024/09/17 02:11:48 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:11:53.786367    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:11:53.786383    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetMachineName
	I0917 02:11:53.786507    4110 buildroot.go:166] provisioning hostname "ha-857000-m03"
	I0917 02:11:53.786518    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetMachineName
	I0917 02:11:53.786619    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:53.786716    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:53.786814    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.786901    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.786991    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:53.787125    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:53.787256    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:53.787264    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m03 && echo "ha-857000-m03" | sudo tee /etc/hostname
	I0917 02:11:53.860809    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m03
	
	I0917 02:11:53.860831    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:53.860995    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:53.861092    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.861199    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:53.861302    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:53.861448    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:53.861610    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:53.861621    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:11:53.932575    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:11:53.932592    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:11:53.932604    4110 buildroot.go:174] setting up certificates
	I0917 02:11:53.932611    4110 provision.go:84] configureAuth start
	I0917 02:11:53.932618    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetMachineName
	I0917 02:11:53.932757    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:11:53.932853    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:53.932933    4110 provision.go:143] copyHostCerts
	I0917 02:11:53.932962    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:53.933012    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:11:53.933018    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:11:53.933153    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:11:53.933356    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:53.933385    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:11:53.933389    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:11:53.933461    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:11:53.933602    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:53.933640    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:11:53.933645    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:11:53.933711    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:11:53.933855    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m03 san=[127.0.0.1 192.169.0.7 ha-857000-m03 localhost minikube]
	I0917 02:11:54.077333    4110 provision.go:177] copyRemoteCerts
	I0917 02:11:54.077392    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:11:54.077407    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.077544    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.077643    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.077738    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.077820    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:54.116797    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:11:54.116876    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:11:54.136202    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:11:54.136278    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:11:54.156340    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:11:54.156419    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:11:54.175630    4110 provision.go:87] duration metric: took 243.006586ms to configureAuth
	I0917 02:11:54.175645    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:11:54.175825    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:11:54.175845    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:54.175978    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.176072    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.176183    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.176286    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.176390    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.176544    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:54.176682    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:54.176690    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:11:54.238979    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:11:54.238993    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:11:54.239102    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:11:54.239114    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.239249    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.239359    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.239453    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.239547    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.239702    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:54.239844    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:54.239889    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:11:54.314599    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:11:54.314621    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:54.314767    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:54.314854    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.314947    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:54.315024    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:54.315150    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:54.315292    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:54.315304    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:11:55.935197    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:11:55.935211    4110 machine.go:96] duration metric: took 13.220338614s to provisionDockerMachine
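[editor note] The unit text installed above illustrates systemd's override rule: for a non-oneshot service, a bare ExecStart= must first clear any inherited command before a new one is set, otherwise systemd rejects the unit with the "more than one ExecStart= setting" error quoted in the file's comments. Minikube swaps the whole unit file and only restarts Docker when diff reports a change; a minimal sketch of the same clear-then-set idiom as a conventional drop-in (path and dockerd flags are illustrative, not minikube's exact ones):

    # The empty ExecStart= resets the command inherited from the base unit;
    # without it systemd refuses a second ExecStart= for a Type=notify service.
    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '%s\n' '[Service]' 'ExecStart=' \
      'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' |
      sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart docker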
	I0917 02:11:55.935219    4110 start.go:293] postStartSetup for "ha-857000-m03" (driver="hyperkit")
	I0917 02:11:55.935226    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:11:55.935240    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:55.935436    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:11:55.935456    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:55.935555    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:55.935640    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:55.935720    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:55.935796    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:55.975655    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:11:55.982326    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:11:55.982340    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:11:55.982439    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:11:55.982583    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:11:55.982589    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:11:55.982752    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:11:55.995355    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:11:56.016063    4110 start.go:296] duration metric: took 80.833975ms for postStartSetup
	I0917 02:11:56.016085    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.016278    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:11:56.016292    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:56.016390    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.016474    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.016549    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.016621    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:56.056575    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:11:56.056644    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:11:56.090435    4110 fix.go:56] duration metric: took 13.519431085s for fixHost
	I0917 02:11:56.090460    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:56.090600    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.090686    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.090776    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.090860    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.090993    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:11:56.091136    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0917 02:11:56.091142    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:11:56.155623    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564316.081021180
	
	I0917 02:11:56.155639    4110 fix.go:216] guest clock: 1726564316.081021180
	I0917 02:11:56.155645    4110 fix.go:229] Guest: 2024-09-17 02:11:56.08102118 -0700 PDT Remote: 2024-09-17 02:11:56.09045 -0700 PDT m=+89.019475712 (delta=-9.42882ms)
	I0917 02:11:56.155656    4110 fix.go:200] guest clock delta is within tolerance: -9.42882ms
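[editor note] The delta above comes from sampling the guest's clock (date +%s.%N over SSH) and comparing it against the host clock when the command returns; here the skew is about -9ms, well inside tolerance. A coarse, seconds-granularity sketch of the same check (user and IP are illustrative; minikube compares at nanosecond precision):

    guest=$(ssh docker@192.169.0.7 'date +%s')   # sample guest clock over SSH
    host=$(date +%s)                             # sample host clock immediately after
    echo "guest clock delta: $((host - guest))s"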
	I0917 02:11:56.155660    4110 start.go:83] releasing machines lock for "ha-857000-m03", held for 13.584681554s
	I0917 02:11:56.155677    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.155816    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:11:56.177120    4110 out.go:177] * Found network options:
	I0917 02:11:56.197056    4110 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0917 02:11:56.217835    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:11:56.217862    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:56.217881    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.218511    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.218685    4110 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:11:56.218846    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 02:11:56.218876    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:56.218892    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	W0917 02:11:56.218898    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:11:56.219005    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:11:56.219024    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:11:56.219078    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.219246    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.219309    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:11:56.219439    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.219492    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:11:56.219585    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:11:56.219614    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:11:56.219751    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	W0917 02:11:56.256644    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:11:56.256720    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:11:56.309886    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:11:56.309904    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:56.309980    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:56.326165    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:11:56.334717    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:11:56.343026    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:11:56.343079    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:11:56.351351    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:56.359978    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:11:56.368445    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:11:56.376813    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:11:56.385309    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:11:56.393895    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:11:56.402441    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
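[editor note] Collectively, the sed edits above converge containerd's CRI plugin on the cgroupfs driver, the registry.k8s.io/pause:3.10 sandbox image, and unprivileged-port support. As a hedged illustration only (the file is edited in place, and section layout varies across containerd releases), the fragment of /etc/containerd/config.toml they aim at looks roughly like:

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"
      restrict_oom_score_adj = false
      enable_unprivileged_ports = true
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false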
	I0917 02:11:56.410891    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:11:56.418564    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:11:56.426298    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:56.529182    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:11:56.548629    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:11:56.548711    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:11:56.564564    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:56.575668    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:11:56.592483    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:11:56.605747    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:56.616286    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:11:56.636099    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:11:56.646661    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:11:56.662025    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:11:56.665163    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:11:56.672775    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:11:56.686783    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:11:56.787618    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:11:56.902014    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:11:56.902043    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:11:56.916683    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:57.010321    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:11:59.292351    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.28197073s)
	I0917 02:11:59.292423    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:11:59.302881    4110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:11:59.315909    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:59.326097    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:11:59.423622    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:11:59.534194    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:59.650222    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:11:59.664197    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:11:59.675195    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:11:59.768785    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:11:59.834137    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:11:59.834234    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:11:59.838654    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:11:59.838726    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:11:59.844060    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:11:59.874850    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:11:59.874944    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:59.893142    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:11:59.934010    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:11:59.974908    4110 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 02:11:59.996010    4110 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0917 02:12:00.016678    4110 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:12:00.016979    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:12:00.020450    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:12:00.029942    4110 mustload.go:65] Loading cluster: ha-857000
	I0917 02:12:00.030121    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:00.030345    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:00.030368    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:00.039149    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52286
	I0917 02:12:00.039489    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:00.039838    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:00.039856    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:00.040084    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:00.040206    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:12:00.040304    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:00.040367    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:12:00.041347    4110 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:12:00.041604    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:00.041629    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:00.050248    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52288
	I0917 02:12:00.050590    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:00.050943    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:00.050963    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:00.051142    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:00.051249    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:12:00.051358    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.7
	I0917 02:12:00.051364    4110 certs.go:194] generating shared ca certs ...
	I0917 02:12:00.051373    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:12:00.051518    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:12:00.051569    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:12:00.051578    4110 certs.go:256] generating profile certs ...
	I0917 02:12:00.051672    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key
	I0917 02:12:00.051762    4110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key.daf177bc
	I0917 02:12:00.051812    4110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key
	I0917 02:12:00.051819    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:12:00.051841    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:12:00.051859    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:12:00.051878    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:12:00.051895    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:12:00.051919    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:12:00.051943    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:12:00.051962    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:12:00.052037    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:12:00.052085    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:12:00.052093    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:12:00.052128    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:12:00.052160    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:12:00.052188    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:12:00.052263    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:12:00.052296    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.052317    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.052334    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.052362    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:12:00.052450    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:12:00.052535    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:12:00.052624    4110 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:12:00.052722    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:12:00.080096    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 02:12:00.083244    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 02:12:00.090969    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 02:12:00.094112    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 02:12:00.101834    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 02:12:00.104986    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 02:12:00.113430    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 02:12:00.116712    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 02:12:00.124546    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 02:12:00.127709    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 02:12:00.135587    4110 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 02:12:00.138750    4110 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 02:12:00.147884    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:12:00.168533    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:12:00.188900    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:12:00.208781    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:12:00.229275    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 02:12:00.248994    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:12:00.269569    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:12:00.289646    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 02:12:00.309509    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:12:00.329488    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:12:00.349487    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:12:00.369414    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 02:12:00.383327    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 02:12:00.396803    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 02:12:00.410693    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 02:12:00.424533    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 02:12:00.438144    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 02:12:00.451710    4110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 02:12:00.465698    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:12:00.470190    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:12:00.478670    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.482005    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.482051    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:12:00.486183    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:12:00.494427    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:12:00.503098    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.506593    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.506643    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:00.510950    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:12:00.519387    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:12:00.527796    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.531174    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.531231    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:12:00.535528    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
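
The run of openssl commands above is the standard OpenSSL hashed-directory convention: each CA certificate dropped into /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash (for example b5213941.0 for minikubeCA.pem), so TLS consumers that scan the hashed directory can resolve it. A minimal Go sketch of that idiom, assuming openssl is on PATH; the paths are illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert mirrors the idiom in the log: compute the OpenSSL subject hash
// of a PEM certificate and symlink it into the hashed cert directory as
// <hash>.0, the "ln -fs" step seen above.
func linkCert(pem, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	// ln -fs equivalent: drop any stale link, then create a fresh one.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	// Illustrative path only; the log links minikubeCA.pem this way.
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
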
	I0917 02:12:00.543734    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:12:00.547058    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:12:00.551336    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:12:00.555666    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:12:00.560095    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:12:00.564671    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:12:00.568907    4110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
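
Each `-checkend 86400` invocation above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the cert validity sweep decides whether regeneration is needed. The same check in pure Go, as a sketch; the path is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend 86400` answers in the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
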
	I0917 02:12:00.573116    4110 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.1 docker true true} ...
	I0917 02:12:00.573181    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
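
The kubelet unit above is a systemd drop-in: the empty ExecStart= clears any packaged command before the node-specific one (hostname override and node IP for m03) is set. A sketch of rendering such a drop-in with text/template; the struct and values mirror the node config printed above, but the rendering code itself is illustrative, not minikube's:

package main

import (
	"os"
	"text/template"
)

// The template fields come from the node entry the log prints:
// {m03 192.169.0.7 8443 v1.31.1 docker true true}.
const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.1", "ha-857000-m03", "192.169.0.7"})
}
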
	I0917 02:12:00.573213    4110 kube-vip.go:115] generating kube-vip config ...
	I0917 02:12:00.573252    4110 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 02:12:00.585709    4110 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 02:12:00.585750    4110 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
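
The generated manifest runs kube-vip as a static pod on each control-plane node: the instances leader-elect on the plndr-cp-lock lease, the winner ARPs the virtual IP 192.169.0.254 on eth0, and with lb_enable set it also balances API traffic across control planes (hence the earlier modprobe of the ip_vs modules). A throwaway Go probe to see whether the VIP is answering; InsecureSkipVerify is used only because this sketch does not load the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Probe the floating VIP from the manifest above. Not part of the test
// harness; just a quick reachability check for the kube-vip address.
func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.169.0.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP healthz:", resp.Status)
}
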
	I0917 02:12:00.585815    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:12:00.593621    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:12:00.593672    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 02:12:00.600967    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 02:12:00.614925    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:12:00.628761    4110 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0917 02:12:00.642265    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:12:00.645102    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
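
The shell one-liner above pins control-plane.minikube.internal to the VIP: it filters any existing entry out of /etc/hosts, appends the current mapping, writes to a temp file, and copies it back into place. The same rewrite sketched in Go; paths and error handling are illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any line ending in "\t<name>" from the hosts file and
// appends the current mapping, mirroring the grep -v / echo / cp idiom.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	// Write to a temp file first so a crash cannot leave a truncated hosts file.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := pinHost("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
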
	I0917 02:12:00.654336    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:00.752482    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:00.767122    4110 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:12:00.767316    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:00.788252    4110 out.go:177] * Verifying Kubernetes components...
	I0917 02:12:00.808843    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:00.927434    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:00.944321    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:12:00.944565    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 02:12:00.944614    4110 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 02:12:00.944789    4110 node_ready.go:35] waiting up to 6m0s for node "ha-857000-m03" to be "Ready" ...
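
From here the test polls the API directly. Note the override just above: the kubeconfig still points at the VIP (192.169.0.254), but the client is repointed at a concrete control plane (192.169.0.5) before the readiness loop starts. A client-go sketch of the same wait-for-Ready loop; the kubeconfig path is a placeholder, and the interval/timeout mirror the 6m0s wait in the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the test loads its own integration kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Mirror the log: override the stale VIP host with a concrete node address.
	cfg.Host = "https://192.169.0.5:8443"
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-857000-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready:", err == nil)
}
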
	I0917 02:12:00.944851    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:00.944858    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.944867    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.944872    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.946764    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:00.947061    4110 node_ready.go:49] node "ha-857000-m03" has status "Ready":"True"
	I0917 02:12:00.947072    4110 node_ready.go:38] duration metric: took 2.273862ms for node "ha-857000-m03" to be "Ready" ...
	I0917 02:12:00.947078    4110 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:12:00.947127    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:00.947133    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.947139    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.947143    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.950970    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:00.956449    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.956504    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:00.956511    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.956518    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.956526    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.959279    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.959653    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:00.959660    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.959666    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.959669    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.961657    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:00.962160    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.962170    4110 pod_ready.go:82] duration metric: took 5.706294ms for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.962176    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.962215    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nl5j5
	I0917 02:12:00.962221    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.962226    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.962230    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.966635    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:00.967113    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:00.967122    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.967128    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.967131    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.969231    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.969585    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.969594    4110 pod_ready.go:82] duration metric: took 7.413149ms for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.969601    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.969645    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000
	I0917 02:12:00.969650    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.969655    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.969659    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.971799    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.972247    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:00.972254    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.972264    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.972267    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.974411    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.974879    4110 pod_ready.go:93] pod "etcd-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.974888    4110 pod_ready.go:82] duration metric: took 5.282457ms for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.974895    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.974931    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m02
	I0917 02:12:00.974936    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.974941    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.974945    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.977288    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.977952    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:00.977959    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:00.977964    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:00.977966    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:00.980610    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:00.981051    4110 pod_ready.go:93] pod "etcd-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:00.981061    4110 pod_ready.go:82] duration metric: took 6.161283ms for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:00.981068    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.146340    4110 request.go:632] Waited for 165.222252ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:12:01.146408    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:12:01.146414    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.146420    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.146423    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.148663    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:01.345119    4110 request.go:632] Waited for 196.038973ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:01.345177    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:01.345186    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.345198    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.345210    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.348611    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:01.349143    4110 pod_ready.go:93] pod "etcd-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:01.349154    4110 pod_ready.go:82] duration metric: took 368.067559ms for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
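
The `Waited for ...ms due to client-side throttling` lines are client-go's default token-bucket rate limiter at work: the rest.Config dumped earlier has QPS:0 and Burst:0, which client-go treats as the defaults of 5 requests/s with a burst of 10, so a tight poll loop quickly starts queueing. Raising the limits is a one-liner if the polling cadence matters; the values here are illustrative:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the log's client is built from the test kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// 0/0 means client-go's defaults (QPS 5, Burst 10); raise them for chatty pollers.
	cfg.QPS = 50
	cfg.Burst = 100
	fmt.Printf("client limits: qps=%v burst=%v\n", cfg.QPS, cfg.Burst)
}
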
	I0917 02:12:01.349166    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.545007    4110 request.go:632] Waited for 195.782486ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:01.545050    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:01.545055    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.545061    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.545066    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.547602    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:01.745603    4110 request.go:632] Waited for 197.630153ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:01.745661    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:01.745667    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.745673    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.745676    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.748299    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:01.748902    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:01.748919    4110 pod_ready.go:82] duration metric: took 399.734114ms for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.748926    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:01.945883    4110 request.go:632] Waited for 196.866004ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:01.945945    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:01.945954    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:01.945964    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:01.945969    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:01.951958    4110 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:12:02.145413    4110 request.go:632] Waited for 192.798684ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:02.145468    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:02.145478    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.145511    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.145520    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.148357    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.149190    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:02.149203    4110 pod_ready.go:82] duration metric: took 400.265258ms for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:02.149211    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:02.345683    4110 request.go:632] Waited for 196.426528ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.345728    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.345736    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.345744    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.345751    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.348508    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.544925    4110 request.go:632] Waited for 196.020856ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.544994    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.545000    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.545006    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.545009    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.547483    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.744993    4110 request.go:632] Waited for 95.563815ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.745048    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:02.745054    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.745061    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.745065    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.747122    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:02.945441    4110 request.go:632] Waited for 197.559126ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.945475    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:02.945480    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:02.945486    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:02.945491    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:02.948036    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:03.150936    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:03.150968    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.150975    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.150980    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.153272    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:03.346424    4110 request.go:632] Waited for 192.442992ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.346514    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.346521    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.346528    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.346533    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.350998    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:03.649774    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:03.649809    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.649818    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.649823    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.652931    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:03.744972    4110 request.go:632] Waited for 90.967061ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.745023    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:03.745029    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:03.745034    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:03.745039    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:03.747431    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:04.149979    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:04.150024    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.150033    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.150037    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.153328    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:04.153812    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:04.153822    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.153828    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.153832    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.156074    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:04.156716    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:04.650904    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:04.650924    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.650931    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.650946    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.653820    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:04.654378    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:04.654386    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:04.654393    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:04.654396    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:04.656654    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:05.151431    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:05.151485    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.151499    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.151506    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.154809    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:05.155323    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:05.155331    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.155337    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.155340    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.156965    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:05.650343    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:05.650367    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.650413    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.650421    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.653876    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:05.654508    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:05.654516    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:05.654522    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:05.654525    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:05.656260    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:06.149952    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:06.149982    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.149989    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.149994    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.152142    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:06.152594    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:06.152602    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.152608    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.152611    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.154378    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:06.650007    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:06.650040    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.650049    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.650053    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.652517    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:06.653131    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:06.653138    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:06.653144    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:06.653148    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:06.655153    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:06.655511    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:07.150612    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:07.150642    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.150678    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.150687    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.153805    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:07.154498    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:07.154508    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.154516    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.154521    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.156264    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:07.650356    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:07.650381    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.650392    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.650401    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.653535    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:07.653958    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:07.653966    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:07.653972    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:07.653975    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:07.656337    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:08.150386    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:08.150440    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.150452    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.150460    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.153584    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:08.155108    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:08.155123    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.155132    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.155143    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.157038    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:08.650349    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:08.650377    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.650389    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.650398    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.654034    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:08.654828    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:08.654836    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:08.654843    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:08.654846    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:08.656625    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:08.656928    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:09.151423    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:09.151447    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.151459    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.151464    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.154460    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:09.154947    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:09.154956    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.154961    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.154966    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.156555    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:09.650477    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:09.650503    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.650554    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.650568    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.653583    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:09.653960    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:09.653967    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:09.653973    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:09.653983    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:09.655828    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:10.149696    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:10.149720    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.149732    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.149739    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.153151    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:10.153716    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:10.153726    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.153734    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.153739    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.155758    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:10.649780    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:10.649830    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.649844    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.649854    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.653210    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:10.653938    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:10.653945    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:10.653951    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:10.653956    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:10.655718    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:11.149497    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:11.149512    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.149525    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.149530    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.151647    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:11.152174    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:11.152181    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.152187    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.152189    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.154098    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:11.154423    4110 pod_ready.go:103] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:11.650969    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:11.650998    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.651032    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.651039    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.654171    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:11.654962    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:11.654969    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:11.654975    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:11.654979    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:11.656692    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.150871    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:12.150884    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.150890    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.150893    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.153079    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:12.153733    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:12.153741    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.153747    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.153751    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.155608    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.650611    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:12.650636    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.650674    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.650684    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.654409    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:12.654934    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:12.654941    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.654951    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.654954    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.656676    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.657136    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:12.657145    4110 pod_ready.go:82] duration metric: took 10.507747852s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
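
Each iteration of the loop that just finished issues two GETs, the pod and then its node, roughly every 500ms until the pod reports Ready; kube-apiserver-ha-857000-m03 took about 10.5s to get there. The condition being tested is the pod's PodReady status, sketched below (the log's version also checks that the hosting node is Ready):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady is the check behind the pod_ready lines above: a pod counts as
// "Ready" when its PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{}
	p.Status.Conditions = []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}}
	fmt.Println(podReady(p)) // true
}
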
	I0917 02:12:12.657152    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.657184    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:12:12.657189    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.657194    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.657198    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.658893    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.659304    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:12.659312    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.659317    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.659321    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.660920    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.661222    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:12.661230    4110 pod_ready.go:82] duration metric: took 4.073163ms for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.661237    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.661269    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:12:12.661274    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.661279    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.661282    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.662821    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.663178    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:12.663186    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.663192    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.663195    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.664635    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.665084    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:12.665092    4110 pod_ready.go:82] duration metric: took 3.849688ms for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.665098    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:12.665131    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:12.665136    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.665142    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.665157    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.666924    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:12.667551    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:12.667558    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:12.667564    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:12.667566    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:12.669116    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:13.165275    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:13.165342    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.165359    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.165367    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.168538    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:13.169042    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:13.169049    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.169054    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.169059    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.170903    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:13.665896    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:13.665914    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.665923    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.665930    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.668510    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:13.669059    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:13.669066    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:13.669071    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:13.669074    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:13.670842    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:14.165888    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:14.165910    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.165935    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.165941    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.168473    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:14.169111    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:14.169118    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.169124    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.169137    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.170994    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:14.667072    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:14.667128    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.667140    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.667151    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.670650    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:14.671210    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:14.671217    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:14.671222    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:14.671226    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:14.672859    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:14.673218    4110 pod_ready.go:103] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:15.165335    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:15.165362    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.165375    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.165382    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.169212    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:15.169615    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:15.169623    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.169629    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.169633    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.171395    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:15.665422    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:15.665483    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.665498    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.665505    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.667889    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:15.668348    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:15.668356    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:15.668364    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:15.668369    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:15.670115    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.166085    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:16.166134    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.166147    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.166156    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.168879    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:16.169423    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:16.169430    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.169439    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.169442    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.171016    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.666749    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:16.666767    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.666797    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.666802    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.669480    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:16.669826    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:16.669832    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.669838    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.669842    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.671504    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.671930    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.671939    4110 pod_ready.go:82] duration metric: took 4.006767511s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
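The controller-manager wait above is a plain poll loop: two GETs (the pod, then its node) roughly every 500ms until the pod reports Ready, bounded by the 6m0s timeout. A minimal sketch of the same pattern using stock client-go (the clientset wiring and helper name are assumptions; minikube's pod_ready.go wraps this with the logging seen here):

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the named pod about every 500ms (matching the request
// cadence in the log) until its Ready condition is True or timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not yet", keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}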
	I0917 02:12:16.671955    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.671990    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:12:16.671995    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.672000    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.672005    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.673862    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.674451    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:12:16.674459    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.674464    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.674468    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.676355    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.676667    4110 pod_ready.go:93] pod "kube-proxy-528ht" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.676675    4110 pod_ready.go:82] duration metric: took 4.715112ms for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.676682    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.676724    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:12:16.676729    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.676734    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.676738    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.678611    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.678986    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:16.678993    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.678999    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.679003    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.680713    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.681034    4110 pod_ready.go:93] pod "kube-proxy-g9wxm" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.681043    4110 pod_ready.go:82] duration metric: took 4.356651ms for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.681050    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.681091    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:12:16.681097    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.681102    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.681106    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.682940    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.683445    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:16.683452    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.683458    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.683462    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.685017    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:16.685461    4110 pod_ready.go:93] pod "kube-proxy-vskbj" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:16.685470    4110 pod_ready.go:82] duration metric: took 4.414596ms for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.685478    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:16.851971    4110 request.go:632] Waited for 166.418009ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:12:16.852035    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:12:16.852064    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:16.852076    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:16.852084    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:16.855683    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:17.050985    4110 request.go:632] Waited for 194.718198ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.051089    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.051098    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.051110    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.051119    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.054384    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:17.054876    4110 pod_ready.go:93] pod "kube-proxy-zrqvr" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:17.054889    4110 pod_ready.go:82] duration metric: took 369.398412ms for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
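The "Waited for ... due to client-side throttling" lines are emitted by client-go's token-bucket rate limiter once the default request budget (QPS 5, burst 10) is exhausted; the waits here are the limiter queueing requests, not server-side priority and fairness. A hedged sketch of how a caller would raise that budget; QPS and Burst are real rest.Config fields, the values chosen are illustrative:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a larger client-side rate budget,
// which avoids the throttling waits visible in the log above.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5; sustained request rate above this is queued
	cfg.Burst = 100 // default is 10; short bursts above QPS are allowed
	return kubernetes.NewForConfig(cfg)
}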
	I0917 02:12:17.054898    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.250755    4110 request.go:632] Waited for 195.811261ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:12:17.250805    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:12:17.250817    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.250830    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.250841    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.291380    4110 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0917 02:12:17.450914    4110 request.go:632] Waited for 157.443488ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:17.450956    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:17.450990    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.450996    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.450999    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.455828    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:17.456276    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:17.456286    4110 pod_ready.go:82] duration metric: took 401.376038ms for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.456294    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.651418    4110 request.go:632] Waited for 195.082221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:12:17.651455    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:12:17.651461    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.651471    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.651495    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.668422    4110 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0917 02:12:17.850764    4110 request.go:632] Waited for 181.996065ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.850819    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:17.850825    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:17.850832    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:17.850836    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:17.857947    4110 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 02:12:17.858420    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:17.858431    4110 pod_ready.go:82] duration metric: took 402.124989ms for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:17.858439    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:18.051442    4110 request.go:632] Waited for 192.93696ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:12:18.051483    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:12:18.051491    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.051499    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.051512    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.054127    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:18.250926    4110 request.go:632] Waited for 196.199352ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:18.250961    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:18.250966    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.251003    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.251008    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.274920    4110 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0917 02:12:18.275585    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:18.275595    4110 pod_ready.go:82] duration metric: took 417.143356ms for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:18.275606    4110 pod_ready.go:39] duration metric: took 17.328217726s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:12:18.275618    4110 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:12:18.275688    4110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:12:18.289040    4110 api_server.go:72] duration metric: took 17.521587147s to wait for apiserver process to appear ...
	I0917 02:12:18.289060    4110 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:12:18.289072    4110 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0917 02:12:18.292824    4110 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
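The healthz gate above is just an HTTPS GET that must return 200 with the literal body "ok". A minimal stand-alone sketch; skipping TLS verification here is an assumption made for brevity (minikube authenticates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy probes e.g. https://192.169.0.5:8443/healthz and
// succeeds only on a 200 response whose body is exactly "ok".
func apiserverHealthy(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz: status %d, body %q", resp.StatusCode, body)
	}
	return nil
}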
	I0917 02:12:18.292862    4110 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0917 02:12:18.292866    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.292872    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.292879    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.294137    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:18.294247    4110 api_server.go:141] control plane version: v1.31.1
	I0917 02:12:18.294257    4110 api_server.go:131] duration metric: took 5.192363ms to wait for apiserver health ...
	I0917 02:12:18.294263    4110 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 02:12:18.451185    4110 request.go:632] Waited for 156.882548ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.451216    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.451222    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.451248    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.451254    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.490169    4110 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0917 02:12:18.505194    4110 system_pods.go:59] 26 kube-system pods found
	I0917 02:12:18.505219    4110 system_pods.go:61] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.505226    4110 system_pods.go:61] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.505231    4110 system_pods.go:61] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:12:18.505234    4110 system_pods.go:61] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running
	I0917 02:12:18.505237    4110 system_pods.go:61] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:12:18.505240    4110 system_pods.go:61] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:12:18.505244    4110 system_pods.go:61] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:12:18.505247    4110 system_pods.go:61] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:12:18.505250    4110 system_pods.go:61] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running
	I0917 02:12:18.505273    4110 system_pods.go:61] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:12:18.505282    4110 system_pods.go:61] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running
	I0917 02:12:18.505290    4110 system_pods.go:61] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:12:18.505313    4110 system_pods.go:61] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:12:18.505323    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running
	I0917 02:12:18.505338    4110 system_pods.go:61] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:12:18.505343    4110 system_pods.go:61] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:12:18.505351    4110 system_pods.go:61] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:12:18.505361    4110 system_pods.go:61] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:12:18.505367    4110 system_pods.go:61] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running
	I0917 02:12:18.505373    4110 system_pods.go:61] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:12:18.505378    4110 system_pods.go:61] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running
	I0917 02:12:18.505384    4110 system_pods.go:61] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:12:18.505388    4110 system_pods.go:61] "kube-vip-ha-857000" [84b805d8-9a8f-4c6f-b18f-76c98ca4776c] Running
	I0917 02:12:18.505392    4110 system_pods.go:61] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:12:18.505396    4110 system_pods.go:61] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:12:18.505399    4110 system_pods.go:61] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:12:18.505406    4110 system_pods.go:74] duration metric: took 211.134036ms to wait for pod list to return data ...
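The pod inventory above comes from a single list of the kube-system namespace, with each pod's phase and any unready containers reported. A sketch of the same step with client-go (the clientset parameter is assumed; minikube's system_pods.go adds the retry and formatting seen here):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listSystemPods prints a summary like the "26 kube-system pods found"
// block above: one line per pod with its phase and unready containers.
func listSystemPods(cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		var unready []string
		for _, st := range p.Status.ContainerStatuses {
			if !st.Ready {
				unready = append(unready, st.Name)
			}
		}
		fmt.Printf("%q %s unready=%v\n", p.Name, p.Status.Phase, unready)
	}
	return nil
}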
	I0917 02:12:18.505413    4110 default_sa.go:34] waiting for default service account to be created ...
	I0917 02:12:18.650733    4110 request.go:632] Waited for 145.255733ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:12:18.650776    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:12:18.650782    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.650793    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.650798    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.659108    4110 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 02:12:18.659203    4110 default_sa.go:45] found service account: "default"
	I0917 02:12:18.659217    4110 default_sa.go:55] duration metric: took 153.795915ms for default service account to be created ...
	I0917 02:12:18.659227    4110 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 02:12:18.851528    4110 request.go:632] Waited for 192.225662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.851585    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:18.851591    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:18.851597    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:18.851600    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:18.855716    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:18.861599    4110 system_pods.go:86] 26 kube-system pods found
	I0917 02:12:18.861618    4110 system_pods.go:89] "coredns-7c65d6cfc9-fg65r" [1690cf49-3cd5-45ba-bcff-6c2947fb1bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.861630    4110 system_pods.go:89] "coredns-7c65d6cfc9-nl5j5" [dad0f1c6-0feb-4024-b0ee-95776c68bae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:12:18.861635    4110 system_pods.go:89] "etcd-ha-857000" [e9da37c5-ad0f-47e0-b65f-361d2cb9f204] Running
	I0917 02:12:18.861638    4110 system_pods.go:89] "etcd-ha-857000-m02" [f540f85a-9556-44c0-a560-188721a58bd5] Running
	I0917 02:12:18.861642    4110 system_pods.go:89] "etcd-ha-857000-m03" [46cb6c7e-73e2-49b0-986f-c63a23ffa29d] Running
	I0917 02:12:18.861645    4110 system_pods.go:89] "kindnet-4jk9v" [24a018c6-9cbb-4d17-a295-8fef456534a0] Running
	I0917 02:12:18.861649    4110 system_pods.go:89] "kindnet-7pf7v" [eecd1421-3a2f-4e48-b2b2-abcbef7869e7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 02:12:18.861653    4110 system_pods.go:89] "kindnet-vc6z5" [a92e41cb-74d1-4bd4-bffd-b807c859efd3] Running
	I0917 02:12:18.861657    4110 system_pods.go:89] "kindnet-vh2h2" [00d83b6a-e87e-4c36-b073-36e67c76d67d] Running
	I0917 02:12:18.861660    4110 system_pods.go:89] "kube-apiserver-ha-857000" [b5451c9b-3be2-4454-bed6-fdc48031180e] Running
	I0917 02:12:18.861663    4110 system_pods.go:89] "kube-apiserver-ha-857000-m02" [d043a0e6-6ab2-47b6-bb82-ceff496f8336] Running
	I0917 02:12:18.861666    4110 system_pods.go:89] "kube-apiserver-ha-857000-m03" [b3d91c89-8830-4dd9-8c20-4d6f821a3d88] Running
	I0917 02:12:18.861670    4110 system_pods.go:89] "kube-controller-manager-ha-857000" [f4b0f56c-8d7d-4567-a4eb-088805c36c54] Running
	I0917 02:12:18.861673    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m02" [16670a48-0dc2-4665-9b1d-5fc25337ec58] Running
	I0917 02:12:18.861677    4110 system_pods.go:89] "kube-controller-manager-ha-857000-m03" [754fdff4-0e27-4f32-ab12-3a6695924396] Running
	I0917 02:12:18.861682    4110 system_pods.go:89] "kube-proxy-528ht" [18b29dac-e4bf-4f26-988f-b1ba4019f9bc] Running
	I0917 02:12:18.861685    4110 system_pods.go:89] "kube-proxy-g9wxm" [a5b974f1-ed28-4f42-8c86-6fed0bf32317] Running
	I0917 02:12:18.861690    4110 system_pods.go:89] "kube-proxy-vskbj" [7a396757-8954-48d2-b708-dcdfbab21dc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 02:12:18.861694    4110 system_pods.go:89] "kube-proxy-zrqvr" [bb98af15-e15a-4904-b68d-2de3c6e5864c] Running
	I0917 02:12:18.861698    4110 system_pods.go:89] "kube-scheduler-ha-857000" [fdce329c-0340-4e0a-8bc1-295e4970708b] Running
	I0917 02:12:18.861701    4110 system_pods.go:89] "kube-scheduler-ha-857000-m02" [274256f9-cd98-4dd3-befc-35443dcca033] Running
	I0917 02:12:18.861704    4110 system_pods.go:89] "kube-scheduler-ha-857000-m03" [74d4633a-1c51-478d-9c68-d05c964089d9] Running
	I0917 02:12:18.861707    4110 system_pods.go:89] "kube-vip-ha-857000" [c577f2f1-ab99-4fbe-acc1-516a135f0377] Pending
	I0917 02:12:18.861710    4110 system_pods.go:89] "kube-vip-ha-857000-m02" [b194d3e4-3b4d-4075-a6b1-f05d219393a0] Running
	I0917 02:12:18.861713    4110 system_pods.go:89] "kube-vip-ha-857000-m03" [29be8efa-f99f-460a-80bd-ccc25f608e48] Running
	I0917 02:12:18.861715    4110 system_pods.go:89] "storage-provisioner" [d81e7b55-a14e-4dc7-9193-ebe6914cdacf] Running
	I0917 02:12:18.861720    4110 system_pods.go:126] duration metric: took 202.461636ms to wait for k8s-apps to be running ...
	I0917 02:12:18.861726    4110 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:12:18.861778    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:12:18.882032    4110 system_svc.go:56] duration metric: took 20.298661ms WaitForService to wait for kubelet
	I0917 02:12:18.882059    4110 kubeadm.go:582] duration metric: took 18.114595178s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:12:18.882083    4110 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:12:19.052878    4110 request.go:632] Waited for 170.643294ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 02:12:19.052938    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 02:12:19.052951    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:19.052966    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:19.052976    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:19.057011    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:19.057806    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057817    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057824    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057827    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057830    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057834    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057837    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:12:19.057840    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:12:19.057843    4110 node_conditions.go:105] duration metric: took 175.740836ms to run NodePressure ...
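The NodePressure pass above lists the cluster's nodes and reads the per-node cpu and ephemeral-storage capacity that gets printed for each of the four machines. A minimal client-go sketch of that readout (again assuming a clientset; the exact condition checks in node_conditions.go are omitted):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity mirrors the "node storage ephemeral capacity is ... /
// node cpu capacity is ..." lines above for every node in the cluster.
func printNodeCapacity(cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}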
	I0917 02:12:19.057851    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:12:19.057867    4110 start.go:255] writing updated cluster config ...
	I0917 02:12:19.079978    4110 out.go:201] 
	I0917 02:12:19.117280    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:19.117377    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:12:19.138898    4110 out.go:177] * Starting "ha-857000-m04" worker node in "ha-857000" cluster
	I0917 02:12:19.180945    4110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:12:19.180969    4110 cache.go:56] Caching tarball of preloaded images
	I0917 02:12:19.181086    4110 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:12:19.181097    4110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:12:19.181167    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:12:19.181757    4110 start.go:360] acquireMachinesLock for ha-857000-m04: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:12:19.181807    4110 start.go:364] duration metric: took 37.353µs to acquireMachinesLock for "ha-857000-m04"
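acquireMachinesLock serializes VM create/start across concurrent minikube processes on the host, which is why the lock name and acquisition duration are logged. A minimal flock-based stand-in for the same idea (an assumption-level sketch; minikube itself uses a named mutex rather than this exact file lock):

package main

import (
	"os"
	"syscall"
)

// lockMachines takes an exclusive advisory lock on a well-known path and
// returns a release func. POSIX flock, so it works on the darwin host here.
func lockMachines(path string) (release func(), err error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil { // blocks until held
		f.Close()
		return nil, err
	}
	return func() { syscall.Flock(int(f.Fd()), syscall.LOCK_UN); f.Close() }, nil
}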
	I0917 02:12:19.181825    4110 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:12:19.181830    4110 fix.go:54] fixHost starting: m04
	I0917 02:12:19.182086    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:19.182106    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:19.191065    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52292
	I0917 02:12:19.191452    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:19.191850    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:19.191867    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:19.192069    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:19.192186    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:19.192279    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetState
	I0917 02:12:19.192404    4110 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:19.192500    4110 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid from json: 3550
	I0917 02:12:19.193450    4110 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid 3550 missing from process table
	I0917 02:12:19.193488    4110 fix.go:112] recreateIfNeeded on ha-857000-m04: state=Stopped err=<nil>
	I0917 02:12:19.193498    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	W0917 02:12:19.193587    4110 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:12:19.214824    4110 out.go:177] * Restarting existing hyperkit VM for "ha-857000-m04" ...
	I0917 02:12:19.289023    4110 main.go:141] libmachine: (ha-857000-m04) Calling .Start
	I0917 02:12:19.289295    4110 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:19.289356    4110 main.go:141] libmachine: (ha-857000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/hyperkit.pid
	I0917 02:12:19.289453    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Using UUID 32bc812d-06ce-423b-90a4-5417ea5ec912
	I0917 02:12:19.319068    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Generated MAC a:b6:8:34:25:a6
	I0917 02:12:19.319111    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000
	I0917 02:12:19.319291    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"32bc812d-06ce-423b-90a4-5417ea5ec912", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000289980)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:12:19.319339    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"32bc812d-06ce-423b-90a4-5417ea5ec912", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000289980)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:12:19.319395    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "32bc812d-06ce-423b-90a4-5417ea5ec912", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/ha-857000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"}
	I0917 02:12:19.319498    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 32bc812d-06ce-423b-90a4-5417ea5ec912 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/ha-857000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-857000"
	I0917 02:12:19.319538    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:12:19.321260    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 DEBUG: hyperkit: Pid is 4161
	I0917 02:12:19.321886    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Attempt 0
	I0917 02:12:19.321908    4110 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:19.321989    4110 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid from json: 4161
	I0917 02:12:19.324366    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Searching for a:b6:8:34:25:a6 in /var/db/dhcpd_leases ...
	I0917 02:12:19.324461    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0917 02:12:19.324494    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:16:4d:1d:5e:91:c8 ID:1,16:4d:1d:5e:91:c8 Lease:0x66ea9957}
	I0917 02:12:19.324519    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:95:4e:4b:65:fe ID:1,9a:95:4e:4b:65:fe Lease:0x66ea991f}
	I0917 02:12:19.324537    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:c2:63:2b:63:80:76 ID:1,c2:63:2b:63:80:76 Lease:0x66ea990c}
	I0917 02:12:19.324552    4110 main.go:141] libmachine: (ha-857000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:b6:8:34:25:a6 ID:1,a:b6:8:34:25:a6 Lease:0x66e94661}
	I0917 02:12:19.324565    4110 main.go:141] libmachine: (ha-857000-m04) DBG | Found match: a:b6:8:34:25:a6
	I0917 02:12:19.324580    4110 main.go:141] libmachine: (ha-857000-m04) DBG | IP: 192.169.0.8
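The search above maps the VM's generated MAC to an IP by scanning the host's /var/db/dhcpd_leases entries. A sketch of that lookup; the field names follow the "dhcp entry" lines in the log, and the parser itself is an illustrative assumption rather than minikube's exact code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans a dhcpd leases file (e.g. /var/db/dhcpd_leases) and
// returns the ip_address of the lease whose hw_address ends in mac.
func ipForMAC(leases, mac string) (string, error) {
	f, err := os.Open(leases)
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=") // remember the block's IP
		}
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
			return ip, nil // MAC matched within the current lease block
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}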
	I0917 02:12:19.324586    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetConfigRaw
	I0917 02:12:19.325317    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:19.325565    4110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/config.json ...
	I0917 02:12:19.326089    4110 machine.go:93] provisionDockerMachine start ...
	I0917 02:12:19.326109    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:19.326263    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:19.326401    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:19.326560    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:19.326727    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:19.326852    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:19.327048    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:19.327215    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:19.327223    4110 main.go:141] libmachine: About to run SSH command:
	hostname
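Each "About to run SSH command" step drives the guest over SSH with key auth, as set up by the sshutil client earlier in the log. A minimal sketch of running `hostname` with golang.org/x/crypto/ssh; skipping host-key verification is an assumption for brevity:

package main

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// runHostname connects to addr (e.g. "192.169.0.8:22") with the given
// private key and returns the output of the `hostname` command.
func runHostname(addr, user, keyPath string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only; verify keys in real use
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	return string(out), err
}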
	I0917 02:12:19.329900    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:12:19.339917    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:12:19.340861    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:12:19.340880    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:12:19.340887    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:12:19.340906    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:12:19.732737    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:12:19.732752    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:12:19.847625    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:12:19.847643    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:12:19.847688    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:12:19.847715    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:12:19.848483    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:12:19.848501    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:12:25.591852    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:12:25.591915    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:12:25.591925    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:12:25.615174    4110 main.go:141] libmachine: (ha-857000-m04) DBG | 2024/09/17 02:12:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:12:29.572071    4110 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.8:22: connect: connection refused
	I0917 02:12:32.627647    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:12:32.627664    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetMachineName
	I0917 02:12:32.627799    4110 buildroot.go:166] provisioning hostname "ha-857000-m04"
	I0917 02:12:32.627808    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetMachineName
	I0917 02:12:32.627920    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.628014    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.628110    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.628210    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.628294    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.628431    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:32.628580    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:32.628587    4110 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-857000-m04 && echo "ha-857000-m04" | sudo tee /etc/hostname
	I0917 02:12:32.692963    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-857000-m04
	
	I0917 02:12:32.692980    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.693102    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.693193    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.693281    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.693375    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.693517    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:32.693670    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:32.693680    4110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:12:32.753597    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:12:32.753613    4110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:12:32.753629    4110 buildroot.go:174] setting up certificates
	I0917 02:12:32.753635    4110 provision.go:84] configureAuth start
	I0917 02:12:32.753642    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetMachineName
	I0917 02:12:32.753783    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:32.753886    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.753973    4110 provision.go:143] copyHostCerts
	I0917 02:12:32.754002    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:12:32.754055    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:12:32.754061    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:12:32.754199    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:12:32.754425    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:12:32.754455    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:12:32.754465    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:12:32.754535    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:12:32.754684    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:12:32.754713    4110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:12:32.754717    4110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:12:32.754781    4110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:12:32.754925    4110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.ha-857000-m04 san=[127.0.0.1 192.169.0.8 ha-857000-m04 localhost minikube]
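The server cert generated above is signed by the machine CA and carries the listed SANs (127.0.0.1, the VM IP, the hostname, localhost, minikube) so the Docker TLS endpoint is valid under any of those names. A crypto/x509 sketch of that issuance step; the function and hard-coded SANs are illustrative stand-ins for minikube's helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert creates a server certificate with DNS and IP SANs,
// signed by the given CA cert/key, returning the DER bytes and new key.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-857000-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-857000-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}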
	I0917 02:12:32.886815    4110 provision.go:177] copyRemoteCerts
	I0917 02:12:32.886883    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:12:32.886900    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.887049    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.887156    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.887265    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.887345    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:32.921412    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:12:32.921483    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:12:32.942093    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:12:32.942165    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 02:12:32.962202    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:12:32.962278    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:12:32.982539    4110 provision.go:87] duration metric: took 228.892121ms to configureAuth
	I0917 02:12:32.982555    4110 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:12:32.982734    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:32.982747    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:32.982882    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:32.982965    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:32.983053    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.983146    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:32.983222    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:32.983341    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:32.983471    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:32.983479    4110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:12:33.039112    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:12:33.039126    4110 buildroot.go:70] root file system type: tmpfs
	I0917 02:12:33.039209    4110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:12:33.039225    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:33.039356    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:33.039463    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.039553    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.039642    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:33.039765    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:33.039901    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:33.039948    4110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:12:33.105290    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:12:33.105311    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:33.105463    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:33.105557    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.105679    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:33.105803    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:33.106006    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:33.106166    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:33.106179    4110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:12:34.690044    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:12:34.690061    4110 machine.go:96] duration metric: took 15.363692529s to provisionDockerMachine
	I0917 02:12:34.690069    4110 start.go:293] postStartSetup for "ha-857000-m04" (driver="hyperkit")
	I0917 02:12:34.690105    4110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:12:34.690128    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.690331    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:12:34.690344    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.690448    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.690550    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.690643    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.690734    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:34.729693    4110 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:12:34.733386    4110 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:12:34.733399    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:12:34.733491    4110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:12:34.733629    4110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:12:34.733635    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:12:34.733801    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:12:34.743555    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:12:34.777005    4110 start.go:296] duration metric: took 86.908647ms for postStartSetup
	I0917 02:12:34.777029    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.777213    4110 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 02:12:34.777227    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.777324    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.777401    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.777484    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.777560    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:34.811015    4110 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0917 02:12:34.811085    4110 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0917 02:12:34.865249    4110 fix.go:56] duration metric: took 15.683145042s for fixHost
	I0917 02:12:34.865277    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.865435    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.865528    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.865626    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.865720    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.865866    4110 main.go:141] libmachine: Using SSH client type: native
	I0917 02:12:34.866008    4110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5064820] 0x5067500 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0917 02:12:34.866017    4110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:12:34.922683    4110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726564355.020144093
	
	I0917 02:12:34.922697    4110 fix.go:216] guest clock: 1726564355.020144093
	I0917 02:12:34.922703    4110 fix.go:229] Guest: 2024-09-17 02:12:35.020144093 -0700 PDT Remote: 2024-09-17 02:12:34.865267 -0700 PDT m=+127.793621612 (delta=154.877093ms)
	I0917 02:12:34.922714    4110 fix.go:200] guest clock delta is within tolerance: 154.877093ms
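Editor's note: the `fix.go` lines above compare the guest's `date +%s.%N` output against the host clock and log the delta (154.877093ms here). A minimal sketch of that check; the 2s tolerance is an assumed threshold for illustration, not minikube's exact constant:

```go
// Sketch: parse the guest's `date +%s.%N` output and compute the clock skew.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	guestOut := "1726564355.020144093" // from the SSH command above
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	// float64 rounds away some nanoseconds; close enough for a skew check.
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	// The real check compares against the time the command was issued.
	delta := guest.Sub(time.Now())
	const tolerance = 2 * time.Second // assumed threshold
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}
```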
	I0917 02:12:34.922718    4110 start.go:83] releasing machines lock for "ha-857000-m04", held for 15.740632652s
	I0917 02:12:34.922744    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.922875    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:34.945234    4110 out.go:177] * Found network options:
	I0917 02:12:34.965134    4110 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0917 02:12:34.986412    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:12:34.986446    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:12:34.986459    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:12:34.986477    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.987363    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.987619    4110 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:12:34.987838    4110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 02:12:34.987863    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:12:34.987882    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	W0917 02:12:34.987901    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:12:34.987917    4110 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:12:34.988015    4110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:12:34.988040    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:12:34.988144    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.988241    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:12:34.988362    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.988430    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:12:34.988562    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.988636    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:12:34.988712    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:12:34.988798    4110 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	W0917 02:12:35.089466    4110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:12:35.089538    4110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:12:35.103798    4110 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:12:35.103814    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:12:35.103888    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:12:35.122855    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:12:35.131456    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:12:35.140120    4110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:12:35.140187    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:12:35.148614    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:12:35.156897    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:12:35.165192    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:12:35.173754    4110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:12:35.182471    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:12:35.191008    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:12:35.199448    4110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:12:35.207926    4110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:12:35.216411    4110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:12:35.228568    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:35.327014    4110 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:12:35.346549    4110 start.go:495] detecting cgroup driver to use...
	I0917 02:12:35.346628    4110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:12:35.370011    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:12:35.382502    4110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:12:35.397499    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:12:35.408840    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:12:35.420206    4110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:12:35.442422    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:12:35.453508    4110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:12:35.468375    4110 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:12:35.471279    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:12:35.479407    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:12:35.492955    4110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:12:35.593589    4110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:12:35.695477    4110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:12:35.695504    4110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
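Editor's note: the step above writes a 130-byte `/etc/docker/daemon.json` to switch docker to the cgroupfs driver. The run does not print the file, so beyond `exec-opts` the keys below are assumptions; a sketch of generating that kind of file:

```go
// Sketch: build a daemon.json that selects the cgroupfs cgroup driver.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file", // assumed, not shown in the log
		"log-opts":   map[string]string{"max-size": "100m"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```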
	I0917 02:12:35.710594    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:35.826600    4110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:12:38.101010    4110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.274345081s)
	I0917 02:12:38.101138    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:12:38.113882    4110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:12:38.128373    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:12:38.140107    4110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:12:38.249684    4110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:12:38.361672    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:38.469978    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:12:38.489760    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:12:38.502395    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:38.604591    4110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:12:38.669590    4110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:12:38.669684    4110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:12:38.674420    4110 start.go:563] Will wait 60s for crictl version
	I0917 02:12:38.674483    4110 ssh_runner.go:195] Run: which crictl
	I0917 02:12:38.677707    4110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:12:38.702126    4110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:12:38.702225    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:12:38.719390    4110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:12:38.757457    4110 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:12:38.799117    4110 out.go:177]   - env NO_PROXY=192.169.0.5
	I0917 02:12:38.819990    4110 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0917 02:12:38.841085    4110 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	I0917 02:12:38.862007    4110 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:12:38.862240    4110 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:12:38.865326    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
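Editor's note: the one-liner above updates `/etc/hosts` idempotently, `grep -v` strips any stale `host.minikube.internal` line before the fresh mapping is appended and the file copied back. A pure-Go rendition of the same pattern, for illustration (minikube does it in shell, as logged):

```go
// Sketch: idempotent hosts-file upsert matching the logged shell pipeline.
package main

import (
	"fmt"
	"strings"
)

func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, like grep -v above
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.169.0.9\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.169.0.1", "host.minikube.internal"))
}
```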
	I0917 02:12:38.874823    4110 mustload.go:65] Loading cluster: ha-857000
	I0917 02:12:38.875009    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:38.875239    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:38.875265    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:38.884252    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52315
	I0917 02:12:38.884596    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:38.885007    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:38.885024    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:38.885217    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:38.885327    4110 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:12:38.885411    4110 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:12:38.885502    4110 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 4124
	I0917 02:12:38.886472    4110 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:12:38.886740    4110 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:12:38.886764    4110 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:12:38.895399    4110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52317
	I0917 02:12:38.895752    4110 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:12:38.896084    4110 main.go:141] libmachine: Using API Version  1
	I0917 02:12:38.896095    4110 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:12:38.896312    4110 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:12:38.896445    4110 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:12:38.896532    4110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000 for IP: 192.169.0.8
	I0917 02:12:38.896538    4110 certs.go:194] generating shared ca certs ...
	I0917 02:12:38.896550    4110 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:12:38.896701    4110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:12:38.896754    4110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:12:38.896764    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:12:38.896789    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:12:38.896809    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:12:38.896826    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:12:38.896910    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:12:38.896963    4110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:12:38.896974    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:12:38.897008    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:12:38.897042    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:12:38.897070    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:12:38.897139    4110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:12:38.897176    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:12:38.897196    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:12:38.897214    4110 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:38.897242    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:12:38.917488    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:12:38.937120    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:12:38.956856    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:12:38.976762    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:12:38.997198    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:12:39.018037    4110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:12:39.040033    4110 ssh_runner.go:195] Run: openssl version
	I0917 02:12:39.044757    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:12:39.053844    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:12:39.057290    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:12:39.057337    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:12:39.061592    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:12:39.070092    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:12:39.078554    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:39.082016    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:39.082086    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:12:39.086282    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:12:39.094779    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:12:39.103890    4110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:12:39.107498    4110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:12:39.107551    4110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:12:39.111799    4110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
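Editor's note: each certificate block above follows the same pattern, install the PEM under `/usr/share/ca-certificates`, link it into `/etc/ssl/certs`, then add a symlink named after its OpenSSL subject hash (`3ec20f2e.0`, `b5213941.0`, `51391683.0`) so TLS libraries can look it up. A sketch of computing that hash link; the real steps need root, so treat this as illustration:

```go
// Sketch: compute a certificate's OpenSSL subject hash and print the
// corresponding `ln -fs` command, mirroring the logged sequence.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func subjectHash(pemPath string) (string, error) {
	// Same command the log runs: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}
```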
	I0917 02:12:39.120941    4110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:12:39.124549    4110 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 02:12:39.124586    4110 kubeadm.go:934] updating node {m04 192.169.0.8 0 v1.31.1 docker false true} ...
	I0917 02:12:39.124645    4110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-857000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-857000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
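Editor's note: the `kubeadm.go:946` dump above shows the kubelet ExecStart assembled from the node config (version, hostname override, node IP). A sketch of that assembly; `nodeCfg` and `kubeletFlags` are invented names for illustration, not minikube's actual types:

```go
// Sketch: build the kubelet ExecStart line logged above from node settings.
package main

import (
	"fmt"
	"strings"
)

type nodeCfg struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
}

func kubeletFlags(n nodeCfg) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + n.Hostname,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + n.NodeIP,
	}
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s",
		n.KubernetesVersion, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletFlags(nodeCfg{"v1.31.1", "ha-857000-m04", "192.169.0.8"}))
}
```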
	I0917 02:12:39.124713    4110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:12:39.132685    4110 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:12:39.132752    4110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0917 02:12:39.140189    4110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0917 02:12:39.153737    4110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:12:39.167480    4110 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0917 02:12:39.170335    4110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:12:39.180131    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:39.274978    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:39.290344    4110 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0917 02:12:39.290539    4110 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:12:39.312606    4110 out.go:177] * Verifying Kubernetes components...
	I0917 02:12:39.332523    4110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:12:39.447567    4110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:12:39.466307    4110 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:12:39.466524    4110 kapi.go:59] client config for ha-857000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/ha-857000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x673a720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 02:12:39.466571    4110 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0917 02:12:39.467449    4110 node_ready.go:35] waiting up to 6m0s for node "ha-857000-m04" to be "Ready" ...
	I0917 02:12:39.467568    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:12:39.467575    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.467585    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.467591    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.470632    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:39.969561    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:12:39.969576    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.969585    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.969590    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.972203    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:39.972562    4110 node_ready.go:49] node "ha-857000-m04" has status "Ready":"True"
	I0917 02:12:39.972573    4110 node_ready.go:38] duration metric: took 505.091961ms for node "ha-857000-m04" to be "Ready" ...
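Editor's note: the node-ready wait that just completed, and the `pod_ready.go` loop that follows, are the source of the repeated `round_trippers` GETs at roughly 500ms intervals. A hedged sketch of the same wait written directly against client-go (the kubeconfig path is a placeholder; minikube builds its own rest.Config, as the kapi.go line above shows):

```go
// Sketch: poll the API server until a node reports Ready or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s" above
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-857000-m04", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls at roughly this cadence
	}
	fmt.Println("timed out waiting for node to become Ready")
}
```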
	I0917 02:12:39.972579    4110 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:12:39.972614    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0917 02:12:39.972619    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.972625    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.972629    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.976988    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:39.982728    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:39.982773    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:39.982778    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.982795    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.982801    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.985018    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:39.985518    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:39.985526    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:39.985532    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:39.985536    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:39.987300    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:40.482877    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:40.482889    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.482894    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.482898    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.485392    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:40.485952    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:40.485960    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.485965    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.485972    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.487726    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:40.984290    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:40.984330    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.984337    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.984340    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.986636    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:40.987126    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:40.987134    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:40.987140    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:40.987144    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:40.989077    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:41.483798    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:41.483813    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.483838    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.483842    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.485913    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:41.486349    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:41.486357    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.486363    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.486366    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.487997    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:41.984399    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:41.984423    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.984436    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.984441    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.987692    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:41.988563    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:41.988571    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:41.988576    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:41.988580    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:41.990387    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:41.990837    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:42.483597    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:42.483651    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.483720    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.483731    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.486451    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:42.487002    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:42.487009    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.487015    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.487019    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.488735    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:42.984178    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:42.984202    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.984244    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.984250    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.987573    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:42.988040    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:42.988049    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:42.988056    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:42.988060    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:42.989664    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:43.484870    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:43.484884    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.484891    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.484894    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.487141    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:43.487687    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:43.487695    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.487701    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.487705    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.489384    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:43.985004    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:43.985028    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.985040    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.985047    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.988376    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:43.989251    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:43.989258    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:43.989264    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:43.989274    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:43.991010    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:43.991366    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:44.483323    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:44.483341    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.483350    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.483355    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.486151    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:44.486714    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:44.486722    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.486727    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.486732    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.488452    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:44.984530    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:44.984557    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.984569    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.984574    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.987518    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:44.988156    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:44.988163    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:44.988169    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:44.988173    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:44.989906    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:45.484413    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:45.484429    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.484436    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.484438    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.486664    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:45.487158    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:45.487166    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.487172    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.487180    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.488811    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:45.983568    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:45.983588    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.983597    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.983601    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.986094    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:45.986663    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:45.986670    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:45.986676    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:45.986681    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:45.988390    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:46.484237    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:46.484252    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.484258    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.484262    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.486548    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:46.487112    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:46.487120    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.487126    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.487130    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.488764    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:46.489074    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:46.984666    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:46.984685    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.984693    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.984699    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.987277    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:46.987747    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:46.987754    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:46.987760    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:46.987764    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:46.989871    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:47.483189    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:47.483204    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.483220    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.483225    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.485536    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:47.486040    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:47.486048    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.486053    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.486077    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.487968    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:47.983218    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:47.983261    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.983271    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.983276    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.985959    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:47.986467    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:47.986476    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:47.986480    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:47.986483    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:47.988256    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:48.483839    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:48.483855    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.483877    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.483881    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.486127    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:48.486742    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:48.486750    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.486756    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.486763    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.488482    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:48.983104    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:48.983116    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.983123    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.983126    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.986541    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:48.986974    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:48.986982    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:48.986988    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:48.987000    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:48.989572    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:48.989840    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:49.483113    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:49.483127    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.483135    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.483138    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.485418    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:49.485944    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:49.485952    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.485958    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.485965    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.488051    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:49.983392    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:49.983418    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.983430    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.983435    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.990100    4110 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 02:12:49.990521    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:49.990528    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:49.990534    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:49.990551    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:49.995841    4110 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:12:50.484489    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:50.484507    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.484516    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.484519    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.487282    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:50.487803    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:50.487815    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.487821    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.487826    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.489538    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:50.984752    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:50.984776    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.984788    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.984796    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.988059    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:50.988580    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:50.988587    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:50.988593    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:50.988597    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:50.990162    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:50.990537    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:51.483827    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:51.483847    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.483864    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.483902    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.487323    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:51.487924    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:51.487932    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.487937    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.487942    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.489844    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:51.983451    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:51.983470    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.983482    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.983488    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.986994    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:51.987525    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:51.987535    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:51.987543    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:51.987548    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:51.989115    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:52.483263    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:52.483288    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.483325    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.483332    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.486347    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:52.486988    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:52.486995    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.487001    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.487005    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.488688    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:52.983765    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:52.983790    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.983801    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.983810    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.986675    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:52.987089    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:52.987119    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:52.987125    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:52.987129    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:52.988627    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:53.484927    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:53.484941    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.484948    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.484951    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.487216    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:53.487660    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:53.487667    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.487673    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.487676    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.489219    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:53.489560    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:53.984242    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:53.984261    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.984274    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.984280    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.986802    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:53.987318    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:53.987326    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:53.987333    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:53.987336    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:53.989152    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:54.483277    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:54.483309    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.483353    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.483368    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.486304    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:54.486703    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:54.486709    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.486715    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.486718    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.488409    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:54.984401    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:54.984421    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.984432    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.984436    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.987150    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:54.987731    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:54.987739    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:54.987745    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:54.987762    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:54.990093    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:55.484219    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:55.484245    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.484263    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.484270    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.487478    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:55.488038    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:55.488046    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.488052    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.488055    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.489736    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:55.490063    4110 pod_ready.go:103] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"False"
	I0917 02:12:55.983721    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:55.983738    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.983747    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.983751    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.986467    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:55.986910    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:55.986918    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:55.986924    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:55.986927    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:55.988668    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:56.483680    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:56.483698    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.483705    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.483708    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.486006    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:56.486509    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:56.486517    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.486523    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.486526    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.488267    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:56.984953    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:56.984979    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.984991    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.984998    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.988958    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:56.989556    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:56.989567    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:56.989575    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:56.989580    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:56.991555    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.483204    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fg65r
	I0917 02:12:57.483220    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.483244    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.483257    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.489651    4110 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 02:12:57.491669    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.491685    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.491693    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.491697    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.500745    4110 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0917 02:12:57.502366    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.502386    4110 pod_ready.go:82] duration metric: took 17.519343583s for pod "coredns-7c65d6cfc9-fg65r" in "kube-system" namespace to be "Ready" ...
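The ~500ms request pairs above (GET the pod, then GET its node) are the shape of minikube's pod_ready wait loop: fetch the pod, test its Ready condition, and re-check the node so a pod scheduled on an unhealthy node is not counted. As a rough illustration only — a hedged client-go sketch, not minikube's actual pod_ready.go; the kubeconfig path is assumed and the pod name is taken from the log — the same wait can be written as:

	// A minimal sketch, assuming a standard kubeconfig at ~/.kube/config;
	// not minikube's actual pod_ready implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls every 500ms, matching the request cadence above,
	// until the pod's Ready condition is True or the 6-minute deadline expires.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // tolerate transient errors and keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-fg65r"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

PollUntilContextTimeout reproduces the rhythm visible above: one probe every 500ms until the "waiting up to 6m0s" deadline logged at the start of each wait.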
	I0917 02:12:57.502398    4110 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.502483    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nl5j5
	I0917 02:12:57.502497    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.502507    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.502512    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.512509    4110 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0917 02:12:57.513793    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.513807    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.513817    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.513823    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.522244    4110 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 02:12:57.522585    4110 pod_ready.go:93] pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.522595    4110 pod_ready.go:82] duration metric: took 20.190892ms for pod "coredns-7c65d6cfc9-nl5j5" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.522609    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.522650    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000
	I0917 02:12:57.522656    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.522662    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.522666    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.527526    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:57.528075    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.528084    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.528089    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.528100    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.530647    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:57.531009    4110 pod_ready.go:93] pod "etcd-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.531019    4110 pod_ready.go:82] duration metric: took 8.403704ms for pod "etcd-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.531025    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.531068    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m02
	I0917 02:12:57.531073    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.531082    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.531087    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.533324    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:57.533687    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:57.533694    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.533700    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.533704    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.535601    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.535875    4110 pod_ready.go:93] pod "etcd-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.535883    4110 pod_ready.go:82] duration metric: took 4.853562ms for pod "etcd-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.535902    4110 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.535944    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-857000-m03
	I0917 02:12:57.535950    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.535956    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.535960    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.537587    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.537964    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:57.537972    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.537978    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.537982    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.539462    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:12:57.539797    4110 pod_ready.go:93] pod "etcd-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.539805    4110 pod_ready.go:82] duration metric: took 3.894392ms for pod "etcd-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.539816    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:57.684040    4110 request.go:632] Waited for 144.185674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:57.684081    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000
	I0917 02:12:57.684104    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.684125    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.684132    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.686547    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:57.883303    4110 request.go:632] Waited for 196.17665ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.883378    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:57.883388    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:57.883398    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:57.883406    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:57.886942    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:57.887555    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:57.887569    4110 pod_ready.go:82] duration metric: took 347.737487ms for pod "kube-apiserver-ha-857000" in "kube-system" namespace to be "Ready" ...
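The request.go:632 lines above ("Waited for ... due to client-side throttling, not priority and fairness") come from client-go's token-bucket rate limiter, not from the API server: once the per-pod checks start issuing back-to-back GETs, the client's default budget (roughly 5 requests/s with a burst of 10, assuming the defaults are unchanged) runs dry and each call sleeps before being sent. A hedged sketch of where a caller would raise those limits — the field names are real rest.Config fields, the values are arbitrary:

	// A sketch of relaxing client-go's client-side rate limit; not
	// minikube's code, and the QPS/Burst values are arbitrary examples.
	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Once the burst is spent, each request waits in the token bucket;
		// that wait is exactly the "client-side throttling" message above.
		cfg.QPS = 50
		cfg.Burst = 100
		_ = kubernetes.NewForConfigOrDie(cfg)
	}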
	I0917 02:12:57.887576    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.083903    4110 request.go:632] Waited for 196.258589ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:58.084076    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m02
	I0917 02:12:58.084095    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.084104    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.084111    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.087323    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:58.284752    4110 request.go:632] Waited for 196.829301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:58.284841    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:58.284851    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.284863    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.284871    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.287836    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:58.288234    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:58.288243    4110 pod_ready.go:82] duration metric: took 400.655079ms for pod "kube-apiserver-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.288251    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.484581    4110 request.go:632] Waited for 196.285151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:58.484627    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857000-m03
	I0917 02:12:58.484634    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.484670    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.484676    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.487401    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:58.683590    4110 request.go:632] Waited for 195.669934ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:58.683635    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:58.683643    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.683695    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.683709    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.687024    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:58.687397    4110 pod_ready.go:93] pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:58.687407    4110 pod_ready.go:82] duration metric: took 399.144074ms for pod "kube-apiserver-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.687414    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:58.884795    4110 request.go:632] Waited for 197.34012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:12:58.884845    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000
	I0917 02:12:58.884854    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:58.884862    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:58.884886    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:58.887327    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:59.083807    4110 request.go:632] Waited for 195.949253ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:59.083945    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:12:59.083961    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.083973    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.083980    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.087431    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:59.087851    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:59.087864    4110 pod_ready.go:82] duration metric: took 400.438219ms for pod "kube-controller-manager-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.087874    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.283487    4110 request.go:632] Waited for 195.551174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:12:59.283570    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m02
	I0917 02:12:59.283587    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.283598    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.283604    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.286668    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:12:59.483240    4110 request.go:632] Waited for 196.050684ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:59.483272    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:12:59.483277    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.483284    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.483287    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.485481    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:59.485790    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:59.485799    4110 pod_ready.go:82] duration metric: took 397.912163ms for pod "kube-controller-manager-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.485808    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.684196    4110 request.go:632] Waited for 198.346846ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:59.684283    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857000-m03
	I0917 02:12:59.684289    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.684295    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.684299    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.686349    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:12:59.883921    4110 request.go:632] Waited for 197.130794ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:59.883972    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:12:59.883980    4110 round_trippers.go:469] Request Headers:
	I0917 02:12:59.884030    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:12:59.884039    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:12:59.888316    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:12:59.888770    4110 pod_ready.go:93] pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:12:59.888788    4110 pod_ready.go:82] duration metric: took 402.964156ms for pod "kube-controller-manager-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:12:59.888815    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.083631    4110 request.go:632] Waited for 194.730555ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:13:00.083713    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-528ht
	I0917 02:13:00.083720    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.083728    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.083732    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.086353    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:00.285261    4110 request.go:632] Waited for 198.400376ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:13:00.285348    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m04
	I0917 02:13:00.285356    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.285364    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.285370    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.287853    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:00.288149    4110 pod_ready.go:93] pod "kube-proxy-528ht" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:00.288159    4110 pod_ready.go:82] duration metric: took 399.322905ms for pod "kube-proxy-528ht" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.288167    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.484621    4110 request.go:632] Waited for 196.39101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:13:00.484716    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wxm
	I0917 02:13:00.484727    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.484737    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.484744    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.488045    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:00.685321    4110 request.go:632] Waited for 196.686181ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:00.685381    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:00.685438    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.685455    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.685464    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.688919    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:00.689362    4110 pod_ready.go:93] pod "kube-proxy-g9wxm" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:00.689374    4110 pod_ready.go:82] duration metric: took 401.194339ms for pod "kube-proxy-g9wxm" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.689383    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:00.884950    4110 request.go:632] Waited for 195.521785ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:13:00.884994    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskbj
	I0917 02:13:00.885018    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:00.885025    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:00.885034    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:00.887231    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:01.084761    4110 request.go:632] Waited for 197.012037ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.084795    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.084800    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.084806    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.084810    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.088892    4110 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:13:01.089243    4110 pod_ready.go:93] pod "kube-proxy-vskbj" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:01.089253    4110 pod_ready.go:82] duration metric: took 399.857039ms for pod "kube-proxy-vskbj" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.089261    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.284602    4110 request.go:632] Waited for 195.290385ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:13:01.284640    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zrqvr
	I0917 02:13:01.284645    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.284672    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.284680    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.286636    4110 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:13:01.483312    4110 request.go:632] Waited for 196.269648ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:01.483391    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:01.483403    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.483413    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.483434    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.486551    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:01.486934    4110 pod_ready.go:93] pod "kube-proxy-zrqvr" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:01.486943    4110 pod_ready.go:82] duration metric: took 397.670619ms for pod "kube-proxy-zrqvr" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.486950    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.683659    4110 request.go:632] Waited for 196.646108ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:13:01.683796    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000
	I0917 02:13:01.683807    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.683819    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.683825    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.686996    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:01.884224    4110 request.go:632] Waited for 196.55945ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.884363    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000
	I0917 02:13:01.884374    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:01.884385    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:01.884393    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:01.888135    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:01.888538    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:01.888551    4110 pod_ready.go:82] duration metric: took 401.588084ms for pod "kube-scheduler-ha-857000" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:01.888559    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.083387    4110 request.go:632] Waited for 194.732026ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:13:02.083482    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m02
	I0917 02:13:02.083493    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.083503    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.083512    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.087127    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:02.284704    4110 request.go:632] Waited for 197.205174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:02.284756    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m02
	I0917 02:13:02.284761    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.284768    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.284773    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.287752    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:02.288038    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:02.288049    4110 pod_ready.go:82] duration metric: took 399.476957ms for pod "kube-scheduler-ha-857000-m02" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.288056    4110 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.485154    4110 request.go:632] Waited for 197.02881ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:13:02.485191    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-857000-m03
	I0917 02:13:02.485198    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.485206    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.485211    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.487672    4110 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:13:02.685336    4110 request.go:632] Waited for 197.331043ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:02.685388    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-857000-m03
	I0917 02:13:02.685397    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.685411    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.685417    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.688565    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:02.688910    4110 pod_ready.go:93] pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 02:13:02.688918    4110 pod_ready.go:82] duration metric: took 400.85077ms for pod "kube-scheduler-ha-857000-m03" in "kube-system" namespace to be "Ready" ...
	I0917 02:13:02.688929    4110 pod_ready.go:39] duration metric: took 22.715951136s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:13:02.688942    4110 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:13:02.689000    4110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:13:02.699631    4110 system_svc.go:56] duration metric: took 10.684367ms WaitForService to wait for kubelet
	I0917 02:13:02.699646    4110 kubeadm.go:582] duration metric: took 23.408872965s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
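The kubelet probe above is a plain exit-code check: `systemctl is-active --quiet` prints nothing and reports success only while the unit is active, which is why the whole WaitForService step costs about 10ms. A minimal local sketch of the same check (assumption: plain os/exec on the host, whereas minikube runs the command inside the VM through its ssh_runner):

	// A sketch of the service check logged above; minikube executes it
	// over SSH inside the guest VM rather than locally.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `systemctl is-active --quiet <unit>` exits 0 only if the unit
		// is active, so the exit code alone answers the question.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}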
	I0917 02:13:02.699663    4110 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:13:02.884773    4110 request.go:632] Waited for 185.024169ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0917 02:13:02.884858    4110 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0917 02:13:02.884867    4110 round_trippers.go:469] Request Headers:
	I0917 02:13:02.884878    4110 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:13:02.884887    4110 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:13:02.888704    4110 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:13:02.889505    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889516    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889528    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889534    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889537    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889540    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889543    4110 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:13:02.889545    4110 node_conditions.go:123] node cpu capacity is 2
	I0917 02:13:02.889549    4110 node_conditions.go:105] duration metric: took 189.878189ms to run NodePressure ...
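The NodePressure pass above needs only the single GET /api/v1/nodes: the list response already carries each node's capacity (hence the four cpu/ephemeral-storage pairs printed, one per node of the HA cluster) and its condition array. A hedged client-go sketch of the same verification — not minikube's node_conditions.go, and the kubeconfig path is an assumption:

	// A sketch of a node-pressure check: list nodes once, print capacity,
	// and flag any node reporting memory/disk/PID pressure.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status == corev1.ConditionTrue {
						fmt.Printf("  pressure condition: %s\n", c.Type)
					}
				}
			}
		}
	}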
	I0917 02:13:02.889557    4110 start.go:241] waiting for startup goroutines ...
	I0917 02:13:02.889572    4110 start.go:255] writing updated cluster config ...
	I0917 02:13:02.889954    4110 ssh_runner.go:195] Run: rm -f paused
	I0917 02:13:02.930446    4110 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0917 02:13:02.983109    4110 out.go:201] 
	W0917 02:13:03.020673    4110 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0917 02:13:03.057789    4110 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0917 02:13:03.135680    4110 out.go:177] * Done! kubectl is now configured to use "ha-857000" cluster and "default" namespace by default
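The closing warning is the Kubernetes version-skew policy applied at start.go:600: kubectl is supported within one minor version of the API server, and a 1.29 client against a 1.31 cluster is a skew of 2, so minikube points at its bundled wrapper instead. A sketch of the arithmetic using apimachinery's version parser (illustrative only, not minikube's actual check):

	// A sketch of the minor-skew computation behind the warning above.
	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/util/version"
	)

	func main() {
		client := version.MustParseSemantic("v1.29.2")
		cluster := version.MustParseSemantic("v1.31.1")
		skew := int(cluster.Minor()) - int(client.Minor())
		if skew < 0 {
			skew = -skew
		}
		if skew > 1 {
			// kubectl is only supported within one minor of the server,
			// hence the "/usr/local/bin/kubectl is version 1.29.2" warning.
			fmt.Printf("minor skew %d: consider `minikube kubectl -- get pods -A`\n", skew)
		}
	}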
	
	
	==> Docker <==
	Sep 17 09:12:18 ha-857000 cri-dockerd[1413]: time="2024-09-17T09:12:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc1d198ffe0b277108e5042e64604ceb887f7f17cd5814893fbf789f1ce180f0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316039322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316201907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316216597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.316284213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356401685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356591613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356646706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.356901392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358210462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358271414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358284287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.358347315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361819988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361879924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361892293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:18 ha-857000 dockerd[1166]: time="2024-09-17T09:12:18.361954784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:12:48 ha-857000 dockerd[1160]: time="2024-09-17T09:12:48.289404793Z" level=info msg="ignoring event" container=67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:12:48 ha-857000 dockerd[1166]: time="2024-09-17T09:12:48.290629069Z" level=info msg="shim disconnected" id=67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7 namespace=moby
	Sep 17 09:12:48 ha-857000 dockerd[1166]: time="2024-09-17T09:12:48.290966877Z" level=warning msg="cleaning up after shim disconnected" id=67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7 namespace=moby
	Sep 17 09:12:48 ha-857000 dockerd[1166]: time="2024-09-17T09:12:48.291008241Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269678049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269745426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269758363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:13:00 ha-857000 dockerd[1166]: time="2024-09-17T09:13:00.269841312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
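The first cri-dockerd line in this section shows cluster DNS being wired into the busybox pod sandbox: the container's resolv.conf is rewritten to point at the kube-dns ClusterIP (10.96.0.10) with the standard cluster search path. Reconstructed from that log line alone, the rewritten file would read:

	nameserver 10.96.0.10
	search default.svc.cluster.local svc.cluster.local cluster.local
	options ndots:5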
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d940d576a500a       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   6fb8068a5c29f       storage-provisioner
	119f2deb32f13       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   fc1d198ffe0b2       busybox-7dff88458-4jzg8
	b7aa83ae3a822       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   f4e7a7b3c65e5       coredns-7c65d6cfc9-nl5j5
	c37a677e31180       60c005f310ff3                                                                                         2 minutes ago        Running             kube-proxy                1                   5294422217d99       kube-proxy-vskbj
	3d889c7c8da7e       12968670680f4                                                                                         2 minutes ago        Running             kindnet-cni               1                   80326e6e99372       kindnet-7pf7v
	7b8b62bf7340c       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   f4cf87ea66207       coredns-7c65d6cfc9-fg65r
	67814a4514b10       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   6fb8068a5c29f       storage-provisioner
	ca7fe8ccd4c53       175ffd71cce3d                                                                                         3 minutes ago        Running             kube-controller-manager   6                   77f536a07a3a6       kube-controller-manager-ha-857000
	475dedee37228       6bab7719df100                                                                                         3 minutes ago        Running             kube-apiserver            6                   0968090389d54       kube-apiserver-ha-857000
	37d6d6479e30b       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  1                   2842ed202c474       kube-vip-ha-857000
	00ff29c213716       9aa1fad941575                                                                                         3 minutes ago        Running             kube-scheduler            2                   309841a63d772       kube-scheduler-ha-857000
	13b7f8a93ad49       175ffd71cce3d                                                                                         3 minutes ago        Exited              kube-controller-manager   5                   77f536a07a3a6       kube-controller-manager-ha-857000
	8c0804e78de8f       2e96e5913fc06                                                                                         3 minutes ago        Running             etcd                      2                   6cfb11ed1d6ba       etcd-ha-857000
	a18a6b023cd60       6bab7719df100                                                                                         3 minutes ago        Exited              kube-apiserver            5                   0968090389d54       kube-apiserver-ha-857000
	034279696db8f       38af8ddebf499                                                                                         8 minutes ago        Exited              kube-vip                  0                   4205e70bfa1bb       kube-vip-ha-857000
	d9fae1497b048       9aa1fad941575                                                                                         8 minutes ago        Exited              kube-scheduler            1                   37d9fe68f2e59       kube-scheduler-ha-857000
	f4f59b8c76404       2e96e5913fc06                                                                                         8 minutes ago        Exited              etcd                      1                   a23094a650513       etcd-ha-857000
	fe908ac73b00f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago       Exited              busybox                   0                   80864159ef38e       busybox-7dff88458-4jzg8
	521527f17691c       c69fa2e9cbf5f                                                                                         13 minutes ago       Exited              coredns                   0                   aa21641a5b16e       coredns-7c65d6cfc9-nl5j5
	f991c8e956d90       c69fa2e9cbf5f                                                                                         13 minutes ago       Exited              coredns                   0                   da08087b51cd9       coredns-7c65d6cfc9-fg65r
	5d84a01abd3e7       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              13 minutes ago       Exited              kindnet-cni               0                   38db6fab73655       kindnet-7pf7v
	0b03e5e488939       60c005f310ff3                                                                                         13 minutes ago       Exited              kube-proxy                0                   067bc1b2ad7fa       kube-proxy-vskbj
	
	
	==> coredns [521527f17691] <==
	[INFO] 10.244.2.2:33230 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100028s
	[INFO] 10.244.2.2:37727 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077614s
	[INFO] 10.244.2.2:51233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090375s
	[INFO] 10.244.1.2:43082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115984s
	[INFO] 10.244.1.2:45048 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000071244s
	[INFO] 10.244.1.2:48877 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106601s
	[INFO] 10.244.1.2:59235 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000068348s
	[INFO] 10.244.1.2:53808 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064222s
	[INFO] 10.244.1.2:54982 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064992s
	[INFO] 10.244.0.4:59177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012236s
	[INFO] 10.244.0.4:59468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096608s
	[INFO] 10.244.0.4:49953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108018s
	[INFO] 10.244.2.2:36658 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081427s
	[INFO] 10.244.1.2:53166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140458s
	[INFO] 10.244.1.2:60442 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069729s
	[INFO] 10.244.0.4:60564 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007076s
	[INFO] 10.244.0.4:57696 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000125726s
	[INFO] 10.244.2.2:33447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114855s
	[INFO] 10.244.2.2:49647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000058138s
	[INFO] 10.244.2.2:55869 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00009725s
	[INFO] 10.244.1.2:49826 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096631s
	[INFO] 10.244.1.2:33376 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000046366s
	[INFO] 10.244.1.2:47184 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000084276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
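	
	The NXDOMAIN/NOERROR pattern in the query log above is the pod resolver's search-path expansion at work: with the default ndots setting, a short name such as kubernetes.default is tried against each search domain first (hence the failing kubernetes.default.default.svc.cluster.local lookups) before the correct kubernetes.default.svc.cluster.local answer is found. A minimal Go sketch of the same contrast follows; the names are taken from the log, but the probe itself is illustrative and only behaves this way when run inside a cluster pod:
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		r := &net.Resolver{}
		// The short name is subject to resolv.conf search/ndots expansion;
		// the trailing-dot FQDN is queried as-is.
		for _, host := range []string{"kubernetes.default", "kubernetes.default.svc.cluster.local."} {
			addrs, err := r.LookupHost(ctx, host)
			fmt.Printf("%-45s addrs=%v err=%v\n", host, addrs, err)
		}
	}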
	
	
	==> coredns [7b8b62bf7340] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40424 - 46793 "HINFO IN 2652948645074262826.4033840954787183129. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019948501s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[345670875]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.718) (total time: 30000ms):
	Trace[345670875]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (09:12:48.718)
	Trace[345670875]: [30.000647992s] [30.000647992s] END
	[INFO] plugin/kubernetes: Trace[990255223]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.716) (total time: 30002ms):
	Trace[990255223]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (09:12:48.718)
	Trace[990255223]: [30.002122547s] [30.002122547s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1561533284]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.716) (total time: 30004ms):
	Trace[1561533284]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (09:12:48.720)
	Trace[1561533284]: [30.004471134s] [30.004471134s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
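	
	The three reflector timeouts above are the same underlying failure: for roughly 30s after the restart this CoreDNS pod could not open a TCP connection to the kube-apiserver Service VIP (10.96.0.1:443), plausibly while the node's Service routing was still being re-established; once the dial succeeds, the [ERROR] lines stop. That connect step can be reproduced with a trivial probe; a hedged Go sketch, where the address comes from the log and everything else (function name, timeout) is illustrative:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// probeVIP mirrors the TCP connect that client-go's reflector performs
	// before any HTTP/TLS exchange; when the VIP is unreachable it fails
	// the same way as the log ("dial tcp 10.96.0.1:443: i/o timeout").
	func probeVIP(addr string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return err
		}
		return conn.Close()
	}
	
	func main() {
		// Only meaningful when run inside the cluster network.
		if err := probeVIP("10.96.0.1:443", 5*time.Second); err != nil {
			fmt.Println("apiserver VIP unreachable:", err)
			return
		}
		fmt.Println("apiserver VIP reachable")
	}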
	
	
	==> coredns [b7aa83ae3a82] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48468 - 41934 "HINFO IN 5248560894606224369.8303849678443807322. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019682687s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[134011415]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.720) (total time: 30000ms):
	Trace[134011415]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (09:12:48.721)
	Trace[134011415]: [30.000772699s] [30.000772699s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1931337556]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.720) (total time: 30001ms):
	Trace[1931337556]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (09:12:48.721)
	Trace[1931337556]: [30.001621273s] [30.001621273s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2093896532]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 09:12:18.720) (total time: 30001ms):
	Trace[2093896532]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (09:12:48.721)
	Trace[2093896532]: [30.001436763s] [30.001436763s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [f991c8e956d9] <==
	[INFO] 10.244.1.2:36169 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206963s
	[INFO] 10.244.1.2:33814 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000088589s
	[INFO] 10.244.1.2:57385 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000535008s
	[INFO] 10.244.0.4:54856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135529s
	[INFO] 10.244.0.4:47831 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.019088159s
	[INFO] 10.244.0.4:46325 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201714s
	[INFO] 10.244.0.4:45239 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000255383s
	[INFO] 10.244.0.4:55042 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000141827s
	[INFO] 10.244.2.2:47888 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108401s
	[INFO] 10.244.2.2:41486 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00044994s
	[INFO] 10.244.2.2:50623 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082841s
	[INFO] 10.244.1.2:54143 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098038s
	[INFO] 10.244.1.2:38802 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046632s
	[INFO] 10.244.0.4:39532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.002579505s
	[INFO] 10.244.2.2:53978 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077749s
	[INFO] 10.244.2.2:60710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092889s
	[INFO] 10.244.2.2:51255 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044117s
	[INFO] 10.244.1.2:36996 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056219s
	[INFO] 10.244.1.2:39487 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090704s
	[INFO] 10.244.0.4:39831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131192s
	[INFO] 10.244.0.4:35770 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154922s
	[INFO] 10.244.2.2:45820 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113973s
	[INFO] 10.244.1.2:44519 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120184s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-857000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T02_00_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:00:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:14:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:00:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:00:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:00:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-857000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 54854ca4cf93431694d9ad27a68ef89d
	  System UUID:                f6fb40b6-0000-0000-91c0-dbf4ea1b682c
	  Boot ID:                    a1af0517-f4c2-4eae-96db-f7479d049a6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4jzg8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-fg65r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-nl5j5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-857000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-7pf7v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-857000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-857000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-vskbj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-857000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-857000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m20s                  kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node ha-857000 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node ha-857000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node ha-857000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           13m                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  NodeReady                13m                    kubelet          Node ha-857000 status is now: NodeReady
	  Normal  RegisteredNode           12m                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           9m18s                  node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  Starting                 3m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m54s)  kubelet          Node ha-857000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x8 over 3m54s)  kubelet          Node ha-857000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m54s)  kubelet          Node ha-857000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                     node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           2m59s                  node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           2m31s                  node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
	  Normal  RegisteredNode           29s                    node-controller  Node ha-857000 event: Registered Node ha-857000 in Controller
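	
	For reference, the percentages in the Allocated resources table above are just the requests/limits divided by this node's allocatable capacity, integer-truncated: 950m CPU against 2 CPUs (2000m) is 47.5%, shown as 47%, and 290Mi requested memory against 2164336Ki (about 2113.6Mi) is about 13.7%, shown as 13%. The same arithmetic applies to the per-node tables that follow.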
	
	
	Name:               ha-857000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_01_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:01:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:14:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:01:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:11:41 +0000   Tue, 17 Sep 2024 09:11:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-857000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 39fe1ffb0a9e4afb9fa3c09c6b13fed7
	  System UUID:                19404b28-0000-0000-842d-d4858a62cbd3
	  Boot ID:                    625329b0-bed9-4da5-90fd-2859c5b852dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mhjf6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-857000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-vh2h2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-857000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-857000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zrqvr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-857000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-857000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m56s                  kube-proxy       
	  Normal   Starting                 9m22s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-857000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-857000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-857000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                    node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Warning  Rebooted                 9m26s                  kubelet          Node ha-857000-m02 has been rebooted, boot id: b4c87c19-d878-45a1-b0c5-442ae4d2861b
	  Normal   NodeHasSufficientPID     9m26s                  kubelet          Node ha-857000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m26s                  kubelet          Node ha-857000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 9m26s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m26s                  kubelet          Node ha-857000-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m18s                  node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   Starting                 3m12s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m12s (x8 over 3m12s)  kubelet          Node ha-857000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m12s (x8 over 3m12s)  kubelet          Node ha-857000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m12s (x7 over 3m12s)  kubelet          Node ha-857000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m                     node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           2m59s                  node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           2m31s                  node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	  Normal   RegisteredNode           29s                    node-controller  Node ha-857000-m02 event: Registered Node ha-857000-m02 in Controller
	
	
	Name:               ha-857000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_03_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:03:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:14:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:12:01 +0000   Tue, 17 Sep 2024 09:03:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-857000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 69dae176c7914316a8660d135e30666c
	  System UUID:                3d8f47ea-0000-0000-a80b-a24a99cad96e
	  Boot ID:                    e1620cd8-3a62-4426-a27e-eeaf7b39756d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5x9l8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-857000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-vc6z5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-857000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-857000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-g9wxm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-857000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-857000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m35s              kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-857000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-857000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-857000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           9m18s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           3m                 node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           2m59s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   Starting                 2m39s              kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m38s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m38s              kubelet          Node ha-857000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s              kubelet          Node ha-857000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s              kubelet          Node ha-857000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m38s              kubelet          Node ha-857000-m03 has been rebooted, boot id: e1620cd8-3a62-4426-a27e-eeaf7b39756d
	  Normal   RegisteredNode           2m31s              node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	  Normal   RegisteredNode           29s                node-controller  Node ha-857000-m03 event: Registered Node ha-857000-m03 in Controller
	
	
	Name:               ha-857000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_04_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:04:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:14:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:12:39 +0000   Tue, 17 Sep 2024 09:12:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-857000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 15c3f15f82fe4af0a76f2083dcf53a13
	  System UUID:                32bc423b-0000-0000-90a4-5417ea5ec912
	  Boot ID:                    cd10fc3d-989b-457a-8925-881b38fed37e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4jk9v       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-528ht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 118s               kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-857000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-857000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-857000-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-857000-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m18s              node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           3m                 node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           2m59s              node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   RegisteredNode           2m31s              node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	  Normal   NodeNotReady             2m20s              node-controller  Node ha-857000-m04 status is now: NodeNotReady
	  Normal   Starting                 2m                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m (x3 over 2m)    kubelet          Node ha-857000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m (x3 over 2m)    kubelet          Node ha-857000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m (x3 over 2m)    kubelet          Node ha-857000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m (x2 over 2m)    kubelet          Node ha-857000-m04 has been rebooted, boot id: cd10fc3d-989b-457a-8925-881b38fed37e
	  Normal   NodeReady                2m (x2 over 2m)    kubelet          Node ha-857000-m04 status is now: NodeReady
	  Normal   RegisteredNode           29s                node-controller  Node ha-857000-m04 event: Registered Node ha-857000-m04 in Controller
	
	
	Name:               ha-857000-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-857000-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=ha-857000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_14_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:14:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857000-m05
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:14:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:14:33 +0000   Tue, 17 Sep 2024 09:14:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:14:33 +0000   Tue, 17 Sep 2024 09:14:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:14:33 +0000   Tue, 17 Sep 2024 09:14:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:14:33 +0000   Tue, 17 Sep 2024 09:14:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-857000-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 2db6cfb0d1434c14b519f27d6d4511fd
	  System UUID:                ee9442ef-0000-0000-9576-64d480b59214
	  Boot ID:                    81080519-7f3f-4191-9d49-cd7fa64b5401
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-857000-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         35s
	  kube-system                 kindnet-dmlfn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      37s
	  kube-system                 kube-apiserver-ha-857000-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-ha-857000-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-6dtwp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-ha-857000-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-vip-ha-857000-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 33s                kube-proxy       
	  Normal  NodeAllocatableEnforced  38s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  37s (x8 over 38s)  kubelet          Node ha-857000-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 38s)  kubelet          Node ha-857000-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 38s)  kubelet          Node ha-857000-m05 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node ha-857000-m05 event: Registered Node ha-857000-m05 in Controller
	  Normal  RegisteredNode           34s                node-controller  Node ha-857000-m05 event: Registered Node ha-857000-m05 in Controller
	  Normal  RegisteredNode           34s                node-controller  Node ha-857000-m05 event: Registered Node ha-857000-m05 in Controller
	  Normal  RegisteredNode           29s                node-controller  Node ha-857000-m05 event: Registered Node ha-857000-m05 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035828] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007970] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.690889] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006763] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.660573] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.226234] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.530337] systemd-fstab-generator[461]: Ignoring "noauto" option for root device
	[  +0.102427] systemd-fstab-generator[473]: Ignoring "noauto" option for root device
	[  +1.905407] systemd-fstab-generator[1088]: Ignoring "noauto" option for root device
	[  +0.264183] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +0.055811] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.051134] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +0.114709] systemd-fstab-generator[1152]: Ignoring "noauto" option for root device
	[  +2.420834] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.093862] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	[  +0.101457] systemd-fstab-generator[1390]: Ignoring "noauto" option for root device
	[  +0.112591] systemd-fstab-generator[1405]: Ignoring "noauto" option for root device
	[  +0.460313] systemd-fstab-generator[1565]: Ignoring "noauto" option for root device
	[  +6.769000] kauditd_printk_skb: 212 callbacks suppressed
	[Sep17 09:11] kauditd_printk_skb: 40 callbacks suppressed
	[Sep17 09:12] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [8c0804e78de8] <==
	{"level":"info","ts":"2024-09-17T09:12:03.647767Z","caller":"traceutil/trace.go:171","msg":"trace[1450641964] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1898; }","duration":"121.988853ms","start":"2024-09-17T09:12:03.525765Z","end":"2024-09-17T09:12:03.647754Z","steps":["trace[1450641964] 'process raft request'  (duration: 121.923204ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T09:13:08.579639Z","caller":"traceutil/trace.go:171","msg":"trace[2135392401] transaction","detail":"{read_only:false; response_revision:2205; number_of_response:1; }","duration":"108.477653ms","start":"2024-09-17T09:13:08.471150Z","end":"2024-09-17T09:13:08.579628Z","steps":["trace[2135392401] 'process raft request'  (duration: 108.403212ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T09:14:02.562336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(5207222418258591927 13314548521573537860 18406437859275119615) learners=(12916380725732009237)"}
	{"level":"info","ts":"2024-09-17T09:14:02.562708Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"b34033d60cf56515","added-peer-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-09-17T09:14:02.562854Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:02.563024Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:02.564080Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:02.564408Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:02.564727Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:02.564003Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:02.565427Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515","remote-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-09-17T09:14:02.565597Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"warn","ts":"2024-09-17T09:14:02.611936Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"b34033d60cf56515","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-09-17T09:14:03.606134Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"b34033d60cf56515","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-17T09:14:03.701399Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:03.721014Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:03.731438Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:03.741642Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"b34033d60cf56515","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-17T09:14:03.741683Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"info","ts":"2024-09-17T09:14:03.789570Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"b34033d60cf56515","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-17T09:14:03.789710Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"b34033d60cf56515"}
	{"level":"warn","ts":"2024-09-17T09:14:04.113077Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"b34033d60cf56515","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-17T09:14:04.609085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(5207222418258591927 12916380725732009237 13314548521573537860 18406437859275119615)"}
	{"level":"info","ts":"2024-09-17T09:14:04.609458Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-09-17T09:14:04.609879Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"b34033d60cf56515"}
	
	
	==> etcd [f4f59b8c7640] <==
	{"level":"info","ts":"2024-09-17T09:10:21.875702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:21.875956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:23.177879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:10:23.692511Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275704,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:10:24.194017Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275704,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:10:24.278276Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:10:24.278324Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff70cdb626651bff","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T09:10:24.301258Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-17T09:10:24.301488Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4843c5334ac100b7","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"info","ts":"2024-09-17T09:10:24.470887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.470976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.470995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.471013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to 4843c5334ac100b7 at term 2"}
	{"level":"info","ts":"2024-09-17T09:10:24.471022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1893] sent MsgPreVote request to ff70cdb626651bff at term 2"}
	{"level":"warn","ts":"2024-09-17T09:10:24.694867Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583741136013275704,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T09:10:24.938557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.746471868s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T09:10:24.938607Z","caller":"traceutil/trace.go:171","msg":"trace[802347161] range","detail":"{range_begin:; range_end:; }","duration":"1.746534049s","start":"2024-09-17T09:10:23.192066Z","end":"2024-09-17T09:10:24.938600Z","steps":["trace[802347161] 'agreement among raft nodes before linearized reading'  (duration: 1.746469617s)"],"step_count":1}
	{"level":"error","ts":"2024-09-17T09:10:24.938646Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: context canceled\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 09:14:39 up 4 min,  0 users,  load average: 0.50, 0.40, 0.16
	Linux ha-857000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3d889c7c8da7] <==
	I0917 09:14:19.604686       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:14:19.604765       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0917 09:14:19.604809       1 main.go:322] Node ha-857000-m05 has CIDR [10.244.4.0/24] 
	I0917 09:14:19.604913       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:14:19.604966       1 main.go:299] handling current node
	I0917 09:14:29.603812       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0917 09:14:29.603909       1 main.go:322] Node ha-857000-m05 has CIDR [10.244.4.0/24] 
	I0917 09:14:29.604115       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:14:29.604276       1 main.go:299] handling current node
	I0917 09:14:29.604421       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:14:29.604738       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:14:29.604890       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:14:29.604901       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:14:29.605039       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:14:29.605115       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:14:39.608543       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:14:39.608583       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:14:39.608727       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:14:39.608757       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:14:39.608857       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:14:39.608886       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:14:39.609023       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0917 09:14:39.609052       1 main.go:322] Node ha-857000-m05 has CIDR [10.244.4.0/24] 
	I0917 09:14:39.609093       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:14:39.609210       1 main.go:299] handling current node
	
	
	==> kindnet [5d84a01abd3e] <==
	I0917 09:05:22.964948       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:32.966280       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:32.966503       1 main.go:299] handling current node
	I0917 09:05:32.966605       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:32.966739       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:32.966951       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:32.967059       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:32.967333       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:32.967449       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:42.964585       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:42.964999       1 main.go:299] handling current node
	I0917 09:05:42.965252       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:42.965422       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:42.965746       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:42.965829       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:42.966204       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:42.966357       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965279       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0917 09:05:52.965376       1 main.go:322] Node ha-857000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:05:52.965533       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0917 09:05:52.965592       1 main.go:322] Node ha-857000-m03 has CIDR [10.244.2.0/24] 
	I0917 09:05:52.965673       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0917 09:05:52.965753       1 main.go:322] Node ha-857000-m04 has CIDR [10.244.3.0/24] 
	I0917 09:05:52.965812       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0917 09:05:52.965902       1 main.go:299] handling current node
	
	
	==> kube-apiserver [475dedee3722] <==
	I0917 09:11:36.333360       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 09:11:36.335609       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 09:11:36.383731       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 09:11:36.383763       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 09:11:36.384428       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 09:11:36.385090       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 09:11:36.385168       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 09:11:36.385606       1 aggregator.go:171] initial CRD sync complete...
	I0917 09:11:36.385745       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 09:11:36.386077       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 09:11:36.386187       1 cache.go:39] Caches are synced for autoregister controller
	I0917 09:11:36.388938       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 09:11:36.396198       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 09:11:36.396611       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0917 09:11:36.396812       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	W0917 09:11:36.438133       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0917 09:11:36.461867       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 09:11:36.465355       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 09:11:36.465387       1 policy_source.go:224] refreshing policies
	I0917 09:11:36.484251       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 09:11:36.540432       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 09:11:36.548136       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0917 09:11:36.554355       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0917 09:11:37.296848       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 09:11:37.666999       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	
	
	==> kube-apiserver [a18a6b023cd6] <==
	I0917 09:10:52.375949       1 options.go:228] external host was not specified, using 192.169.0.5
	I0917 09:10:52.377617       1 server.go:142] Version: v1.31.1
	I0917 09:10:52.377684       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:10:52.824178       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 09:10:52.824356       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 09:10:52.826684       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 09:10:52.828510       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 09:10:52.829505       1 instance.go:232] Using reconciler: lease
	W0917 09:11:12.810788       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0917 09:11:12.813364       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0917 09:11:12.831731       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0917 09:11:12.831919       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [13b7f8a93ad4] <==
	I0917 09:10:53.058887       1 serving.go:386] Generated self-signed cert in-memory
	I0917 09:10:53.469010       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 09:10:53.469133       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:10:53.478660       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 09:10:53.478827       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 09:10:53.478677       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 09:10:53.479256       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0917 09:11:13.838538       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-controller-manager [ca7fe8ccd4c5] <==
	E0917 09:14:01.952809       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-t7q9m failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-t7q9m\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0917 09:14:02.067848       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-857000-m04"
	I0917 09:14:02.069752       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-857000-m05\" does not exist"
	I0917 09:14:02.082951       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-857000-m05" podCIDRs=["10.244.4.0/24"]
	I0917 09:14:02.082992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:02.083012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:02.131285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:02.527608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:03.151798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:04.753606       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:05.036390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:05.037599       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-857000-m05"
	I0917 09:14:05.051631       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:05.440757       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:05.531203       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:05.620111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:05.644082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:10.280630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:10.374858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:12.509235       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:23.788435       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:23.789949       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-857000-m04"
	I0917 09:14:23.799441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:25.050665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	I0917 09:14:33.028322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-857000-m05"
	
	
	==> kube-proxy [0b03e5e48893] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:00:59.069869       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:00:59.079118       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 09:00:59.079199       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:00:59.109184       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:00:59.109227       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:00:59.109245       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:00:59.111661       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:00:59.111847       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:00:59.111876       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:59.112952       1 config.go:199] "Starting service config controller"
	I0917 09:00:59.112979       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:00:59.112995       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:00:59.112998       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:00:59.113603       1 config.go:328] "Starting node config controller"
	I0917 09:00:59.113673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:00:59.213587       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:00:59.213649       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:00:59.213808       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c37a677e3118] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:12:19.054558       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:12:19.080090       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0917 09:12:19.080297       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:12:19.208559       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:12:19.208589       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:12:19.208607       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:12:19.212603       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:12:19.213076       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:12:19.213105       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:12:19.216919       1 config.go:199] "Starting service config controller"
	I0917 09:12:19.217067       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:12:19.217988       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:12:19.218116       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:12:19.228165       1 config.go:328] "Starting node config controller"
	I0917 09:12:19.228196       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:12:19.319175       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:12:19.319361       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:12:19.328396       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [00ff29c21371] <==
	W0917 09:11:36.381567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 09:11:36.381612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.381875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 09:11:36.382055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 09:11:36.382353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382449       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 09:11:36.382484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382718       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 09:11:36.382767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:11:36.382890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 09:11:36.383104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 09:11:36.446439       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0917 09:14:02.163499       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dmlfn\": pod kindnet-dmlfn is already assigned to node \"ha-857000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-dmlfn" node="ha-857000-m05"
	E0917 09:14:02.163933       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e206acfa-4993-496a-9e1d-16406007660e(kube-system/kindnet-dmlfn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dmlfn"
	E0917 09:14:02.164397       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dmlfn\": pod kindnet-dmlfn is already assigned to node \"ha-857000-m05\"" pod="kube-system/kindnet-dmlfn"
	E0917 09:14:02.164245       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mt6p5\": pod kindnet-mt6p5 is already assigned to node \"ha-857000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-mt6p5" node="ha-857000-m05"
	E0917 09:14:02.164750       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b68ac5c1-1d8b-4e95-a0d7-298a99ba43ae(kube-system/kindnet-mt6p5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mt6p5"
	E0917 09:14:02.164931       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mt6p5\": pod kindnet-mt6p5 is already assigned to node \"ha-857000-m05\"" pod="kube-system/kindnet-mt6p5"
	I0917 09:14:02.165129       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mt6p5" node="ha-857000-m05"
	I0917 09:14:02.165842       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dmlfn" node="ha-857000-m05"
	E0917 09:14:02.164280       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-gblm4\": pod kube-proxy-gblm4 is already assigned to node \"ha-857000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-gblm4" node="ha-857000-m05"
	E0917 09:14:02.166155       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 277d386e-b69f-4f54-9864-a58175d4f372(kube-system/kube-proxy-gblm4) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-gblm4"
	E0917 09:14:02.174773       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-gblm4\": pod kube-proxy-gblm4 is already assigned to node \"ha-857000-m05\"" pod="kube-system/kube-proxy-gblm4"
	I0917 09:14:02.175446       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-gblm4" node="ha-857000-m05"
	
	
	==> kube-scheduler [d9fae1497b04] <==
	E0917 09:09:54.047035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:01.417081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:01.417178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:02.586956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:02.587049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:09.339944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:09.340160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:12.375946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:12.375997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:14.579545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:14.579979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:18.357149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:18.357192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:19.971293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:19.971663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:22.259174       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:22.259229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0917 09:10:24.413900       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0917 09:10:24.413975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	I0917 09:10:24.953479       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 09:10:24.953762       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0917 09:10:24.953909       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	E0917 09:10:24.953957       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0917 09:10:24.955052       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 09:10:24.955061       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363896    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eecd1421-3a2f-4e48-b2b2-abcbef7869e7-cni-cfg\") pod \"kindnet-7pf7v\" (UID: \"eecd1421-3a2f-4e48-b2b2-abcbef7869e7\") " pod="kube-system/kindnet-7pf7v"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363942    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a396757-8954-48d2-b708-dcdfbab21dc7-xtables-lock\") pod \"kube-proxy-vskbj\" (UID: \"7a396757-8954-48d2-b708-dcdfbab21dc7\") " pod="kube-system/kube-proxy-vskbj"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.363979    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a396757-8954-48d2-b708-dcdfbab21dc7-lib-modules\") pod \"kube-proxy-vskbj\" (UID: \"7a396757-8954-48d2-b708-dcdfbab21dc7\") " pod="kube-system/kube-proxy-vskbj"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.364021    1572 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d81e7b55-a14e-4dc7-9193-ebe6914cdacf-tmp\") pod \"storage-provisioner\" (UID: \"d81e7b55-a14e-4dc7-9193-ebe6914cdacf\") " pod="kube-system/storage-provisioner"
	Sep 17 09:12:17 ha-857000 kubelet[1572]: I0917 09:12:17.381710    1572 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 17 09:12:18 ha-857000 kubelet[1572]: I0917 09:12:18.732394    1572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1d198ffe0b277108e5042e64604ceb887f7f17cd5814893fbf789f1ce180f0"
	Sep 17 09:12:18 ha-857000 kubelet[1572]: I0917 09:12:18.754870    1572 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-857000" podUID="84b805d8-9a8f-4c6f-b18f-76c98ca4776c"
	Sep 17 09:12:18 ha-857000 kubelet[1572]: I0917 09:12:18.779039    1572 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-857000"
	Sep 17 09:12:19 ha-857000 kubelet[1572]: I0917 09:12:19.228668    1572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ca8e5543181b6f9996b6d7e435c3947" path="/var/lib/kubelet/pods/3ca8e5543181b6f9996b6d7e435c3947/volumes"
	Sep 17 09:12:19 ha-857000 kubelet[1572]: I0917 09:12:19.846405    1572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-857000" podStartSLOduration=1.846388448 podStartE2EDuration="1.846388448s" podCreationTimestamp="2024-09-17 09:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-17 09:12:19.829429782 +0000 UTC m=+94.772487592" watchObservedRunningTime="2024-09-17 09:12:19.846388448 +0000 UTC m=+94.789446258"
	Sep 17 09:12:45 ha-857000 kubelet[1572]: E0917 09:12:45.245854    1572 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 09:12:45 ha-857000 kubelet[1572]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 09:12:45 ha-857000 kubelet[1572]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 09:12:45 ha-857000 kubelet[1572]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 09:12:45 ha-857000 kubelet[1572]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 09:12:45 ha-857000 kubelet[1572]: I0917 09:12:45.363926    1572 scope.go:117] "RemoveContainer" containerID="fcb7038a6ac9ef515ab38df1dab73586ab93030767bab4f0d4d141f34bac886f"
	Sep 17 09:12:49 ha-857000 kubelet[1572]: I0917 09:12:49.092301    1572 scope.go:117] "RemoveContainer" containerID="611759af4bf7a8b48c2739f53afaeba3cb10af70a80bf85bfb78eebe6230c491"
	Sep 17 09:12:49 ha-857000 kubelet[1572]: I0917 09:12:49.092548    1572 scope.go:117] "RemoveContainer" containerID="67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7"
	Sep 17 09:12:49 ha-857000 kubelet[1572]: E0917 09:12:49.092633    1572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d81e7b55-a14e-4dc7-9193-ebe6914cdacf)\"" pod="kube-system/storage-provisioner" podUID="d81e7b55-a14e-4dc7-9193-ebe6914cdacf"
	Sep 17 09:13:00 ha-857000 kubelet[1572]: I0917 09:13:00.226410    1572 scope.go:117] "RemoveContainer" containerID="67814a4514b10a43fcb805320685eed14ae6c928c03c950cc35f4abd031401e7"
	Sep 17 09:13:45 ha-857000 kubelet[1572]: E0917 09:13:45.246174    1572 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 09:13:45 ha-857000 kubelet[1572]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 09:13:45 ha-857000 kubelet[1572]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 09:13:45 ha-857000 kubelet[1572]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 09:13:45 ha-857000 kubelet[1572]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-857000 -n ha-857000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-857000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.77s)

TestMountStart/serial/StartWithMountFirst (136.91s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-316000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0917 02:20:22.209265    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-316000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.830731547s)

-- stdout --
	* [mount-start-1-316000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-316000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-316000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d6:ac:e7:b3:62:85
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-316000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:c2:a0:7f:67:bd
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:c2:a0:7f:67:bd
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-316000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-316000 -n mount-start-1-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-316000 -n mount-start-1-316000: exit status 7 (79.598434ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0917 02:21:16.487747    4749 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 02:21:16.487768    4749 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-316000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.91s)

TestMultiNode/serial/RestartKeepsNodes (203.9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-232000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-232000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-232000: (18.828885608s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-232000 --wait=true -v=8 --alsologtostderr
E0917 02:26:35.808530    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-232000 --wait=true -v=8 --alsologtostderr: exit status 90 (3m1.074722392s)

-- stdout --
	* [multinode-232000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-232000" primary control-plane node in "multinode-232000" cluster
	* Restarting existing hyperkit VM for "multinode-232000" ...
	* Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-232000-m02" worker node in "multinode-232000" cluster
	* Restarting existing hyperkit VM for "multinode-232000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.14
	* Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	  - env NO_PROXY=192.169.0.14
	* Verifying Kubernetes components...
	
	* Starting "multinode-232000-m03" worker node in "multinode-232000" cluster
	* Restarting existing hyperkit VM for "multinode-232000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.14,192.169.0.15
	
	

-- /stdout --
** stderr ** 
	I0917 02:25:10.966836    5221 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:25:10.967023    5221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:25:10.967029    5221 out.go:358] Setting ErrFile to fd 2...
	I0917 02:25:10.967032    5221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:25:10.967205    5221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:25:10.968581    5221 out.go:352] Setting JSON to false
	I0917 02:25:10.991092    5221 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3280,"bootTime":1726561830,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:25:10.991240    5221 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:25:11.013247    5221 out.go:177] * [multinode-232000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:25:11.062209    5221 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:25:11.062261    5221 notify.go:220] Checking for updates...
	I0917 02:25:11.103634    5221 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:25:11.124715    5221 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:25:11.145307    5221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:25:11.166695    5221 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:25:11.187672    5221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:25:11.209239    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:25:11.209413    5221 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:25:11.210175    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:25:11.210246    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:25:11.219866    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53422
	I0917 02:25:11.220228    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:25:11.220615    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:25:11.220623    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:25:11.220831    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:25:11.220936    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:11.249663    5221 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 02:25:11.291450    5221 start.go:297] selected driver: hyperkit
	I0917 02:25:11.291506    5221 start.go:901] validating driver "hyperkit" against &{Name:multinode-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.16 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:25:11.291714    5221 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:25:11.291864    5221 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:25:11.292007    5221 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:25:11.301121    5221 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:25:11.304883    5221 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:25:11.304904    5221 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:25:11.307502    5221 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:25:11.307543    5221 cni.go:84] Creating CNI manager for ""
	I0917 02:25:11.307582    5221 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 02:25:11.307654    5221 start.go:340] cluster config:
	{Name:multinode-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.16 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:25:11.307764    5221 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:25:11.349629    5221 out.go:177] * Starting "multinode-232000" primary control-plane node in "multinode-232000" cluster
	I0917 02:25:11.370307    5221 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:25:11.370365    5221 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:25:11.370382    5221 cache.go:56] Caching tarball of preloaded images
	I0917 02:25:11.370551    5221 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:25:11.370565    5221 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:25:11.370702    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:25:11.371399    5221 start.go:360] acquireMachinesLock for multinode-232000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:25:11.371516    5221 start.go:364] duration metric: took 86.283µs to acquireMachinesLock for "multinode-232000"
	I0917 02:25:11.371547    5221 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:25:11.371561    5221 fix.go:54] fixHost starting: 
	I0917 02:25:11.371905    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:25:11.371930    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:25:11.380462    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53424
	I0917 02:25:11.380782    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:25:11.381229    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:25:11.381250    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:25:11.381460    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:25:11.381636    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:11.381749    5221 main.go:141] libmachine: (multinode-232000) Calling .GetState
	I0917 02:25:11.381840    5221 main.go:141] libmachine: (multinode-232000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:25:11.381925    5221 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid from json: 4780
	I0917 02:25:11.382861    5221 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid 4780 missing from process table
	I0917 02:25:11.382886    5221 fix.go:112] recreateIfNeeded on multinode-232000: state=Stopped err=<nil>
	I0917 02:25:11.382907    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	W0917 02:25:11.382987    5221 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:25:11.424651    5221 out.go:177] * Restarting existing hyperkit VM for "multinode-232000" ...
	I0917 02:25:11.445479    5221 main.go:141] libmachine: (multinode-232000) Calling .Start
	I0917 02:25:11.445739    5221 main.go:141] libmachine: (multinode-232000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:25:11.445785    5221 main.go:141] libmachine: (multinode-232000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/hyperkit.pid
	I0917 02:25:11.447100    5221 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid 4780 missing from process table
	I0917 02:25:11.447123    5221 main.go:141] libmachine: (multinode-232000) DBG | pid 4780 is in state "Stopped"
	I0917 02:25:11.447156    5221 main.go:141] libmachine: (multinode-232000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/hyperkit.pid...
	I0917 02:25:11.447293    5221 main.go:141] libmachine: (multinode-232000) DBG | Using UUID 8074f2a2-7362-42ba-b144-29938f44cef0
	I0917 02:25:11.553992    5221 main.go:141] libmachine: (multinode-232000) DBG | Generated MAC 5a:1f:11:e5:b7:54
	I0917 02:25:11.554016    5221 main.go:141] libmachine: (multinode-232000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000
	I0917 02:25:11.554135    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8074f2a2-7362-42ba-b144-29938f44cef0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:25:11.554162    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8074f2a2-7362-42ba-b144-29938f44cef0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:25:11.554248    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8074f2a2-7362-42ba-b144-29938f44cef0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/multinode-232000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000"}
	I0917 02:25:11.554284    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8074f2a2-7362-42ba-b144-29938f44cef0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/multinode-232000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000"
	I0917 02:25:11.554293    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:25:11.555778    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 DEBUG: hyperkit: Pid is 5233
	I0917 02:25:11.556244    5221 main.go:141] libmachine: (multinode-232000) DBG | Attempt 0
	I0917 02:25:11.556266    5221 main.go:141] libmachine: (multinode-232000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:25:11.556339    5221 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid from json: 5233
	I0917 02:25:11.558031    5221 main.go:141] libmachine: (multinode-232000) DBG | Searching for 5a:1f:11:e5:b7:54 in /var/db/dhcpd_leases ...
	I0917 02:25:11.558105    5221 main.go:141] libmachine: (multinode-232000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0917 02:25:11.558119    5221 main.go:141] libmachine: (multinode-232000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66e94ae5}
	I0917 02:25:11.558131    5221 main.go:141] libmachine: (multinode-232000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9bd8}
	I0917 02:25:11.558140    5221 main.go:141] libmachine: (multinode-232000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9b9c}
	I0917 02:25:11.558147    5221 main.go:141] libmachine: (multinode-232000) DBG | Found match: 5a:1f:11:e5:b7:54
	I0917 02:25:11.558151    5221 main.go:141] libmachine: (multinode-232000) DBG | IP: 192.169.0.14
	I0917 02:25:11.558204    5221 main.go:141] libmachine: (multinode-232000) Calling .GetConfigRaw
	I0917 02:25:11.558782    5221 main.go:141] libmachine: (multinode-232000) Calling .GetIP
	I0917 02:25:11.558959    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:25:11.559327    5221 machine.go:93] provisionDockerMachine start ...
	I0917 02:25:11.559338    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:11.559483    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:11.559597    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:11.559691    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:11.559795    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:11.559920    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:11.560074    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:11.560354    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:11.560367    5221 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:25:11.563754    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:25:11.616965    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:25:11.617672    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:25:11.617692    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:25:11.617710    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:25:11.617722    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:25:11.999536    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:25:11.999551    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:25:12.114206    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:25:12.114221    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:25:12.114232    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:25:12.114241    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:25:12.115134    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:25:12.115147    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:25:17.701802    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:17 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:25:17.701861    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:17 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:25:17.701870    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:17 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:25:17.725838    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:17 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:25:46.633605    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:25:46.633620    5221 main.go:141] libmachine: (multinode-232000) Calling .GetMachineName
	I0917 02:25:46.633767    5221 buildroot.go:166] provisioning hostname "multinode-232000"
	I0917 02:25:46.633779    5221 main.go:141] libmachine: (multinode-232000) Calling .GetMachineName
	I0917 02:25:46.633891    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:46.633980    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:46.634090    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:46.634178    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:46.634291    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:46.634444    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:46.634591    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:46.634599    5221 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-232000 && echo "multinode-232000" | sudo tee /etc/hostname
	I0917 02:25:46.702276    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-232000
	
	I0917 02:25:46.702294    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:46.702424    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:46.702528    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:46.702615    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:46.702704    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:46.702841    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:46.702983    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:46.702994    5221 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-232000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-232000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-232000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:25:46.767411    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:25:46.767432    5221 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:25:46.767453    5221 buildroot.go:174] setting up certificates
	I0917 02:25:46.767460    5221 provision.go:84] configureAuth start
	I0917 02:25:46.767485    5221 main.go:141] libmachine: (multinode-232000) Calling .GetMachineName
	I0917 02:25:46.767628    5221 main.go:141] libmachine: (multinode-232000) Calling .GetIP
	I0917 02:25:46.767755    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:46.767837    5221 provision.go:143] copyHostCerts
	I0917 02:25:46.767869    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:25:46.767938    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:25:46.767946    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:25:46.768090    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:25:46.768309    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:25:46.768354    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:25:46.768359    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:25:46.768436    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:25:46.768577    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:25:46.768614    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:25:46.768619    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:25:46.768694    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:25:46.768828    5221 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.multinode-232000 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-232000]
	I0917 02:25:46.944935    5221 provision.go:177] copyRemoteCerts
	I0917 02:25:46.944993    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:25:46.945011    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:46.945139    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:46.945235    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:46.945321    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:46.945415    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:25:46.983014    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:25:46.983083    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:25:47.002033    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:25:47.002091    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0917 02:25:47.020618    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:25:47.020696    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:25:47.039960    5221 provision.go:87] duration metric: took 272.472463ms to configureAuth
	I0917 02:25:47.039975    5221 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:25:47.040145    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:25:47.040159    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:47.040295    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:47.040391    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:47.040474    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:47.040549    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:47.040628    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:47.040745    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:47.040871    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:47.040878    5221 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:25:47.099959    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:25:47.099972    5221 buildroot.go:70] root file system type: tmpfs
	I0917 02:25:47.100043    5221 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:25:47.100059    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:47.100186    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:47.100273    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:47.100358    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:47.100447    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:47.100591    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:47.100732    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:47.100773    5221 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:25:47.168880    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
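	The paired ExecStart= lines in the unit above are systemd's standard drop-in override pattern: any service type other than oneshot may carry only one effective ExecStart=, so the first, empty assignment clears the command inherited from the base unit before the second assignment sets the replacement. A minimal sketch of the same pattern, with a hypothetical drop-in path and an illustrative dockerd flag (neither is taken from this report):
	
		# hypothetical drop-in; the flag is shown only to illustrate the override
		sudo mkdir -p /etc/systemd/system/docker.service.d
		printf '%s\n' '[Service]' 'ExecStart=' \
		  'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' \
		  | sudo tee /etc/systemd/system/docker.service.d/10-override.conf
		sudo systemctl daemon-reload  # systemd re-reads unit files only after a reload
	
	This is also why the provisioner pairs the file move below with systemctl -f daemon-reload before restarting docker.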
	I0917 02:25:47.168903    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:47.169036    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:47.169129    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:47.169224    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:47.169315    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:47.169456    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:47.169611    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:47.169623    5221 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:25:48.818488    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:25:48.818504    5221 machine.go:96] duration metric: took 37.258998272s to provisionDockerMachine
	I0917 02:25:48.818516    5221 start.go:293] postStartSetup for "multinode-232000" (driver="hyperkit")
	I0917 02:25:48.818523    5221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:25:48.818536    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:48.818724    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:25:48.818738    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:48.818837    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:48.818933    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:48.819013    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:48.819106    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:25:48.856274    5221 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:25:48.859324    5221 command_runner.go:130] > NAME=Buildroot
	I0917 02:25:48.859336    5221 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0917 02:25:48.859343    5221 command_runner.go:130] > ID=buildroot
	I0917 02:25:48.859349    5221 command_runner.go:130] > VERSION_ID=2023.02.9
	I0917 02:25:48.859355    5221 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0917 02:25:48.859449    5221 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:25:48.859461    5221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:25:48.859554    5221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:25:48.859741    5221 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:25:48.859747    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:25:48.859958    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:25:48.867363    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:25:48.886814    5221 start.go:296] duration metric: took 68.289508ms for postStartSetup
	I0917 02:25:48.886835    5221 fix.go:56] duration metric: took 37.515109536s for fixHost
	I0917 02:25:48.886846    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:48.886983    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:48.887084    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:48.887176    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:48.887267    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:48.887393    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:48.887527    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:48.887534    5221 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:25:48.946757    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726565149.085879655
	
	I0917 02:25:48.946768    5221 fix.go:216] guest clock: 1726565149.085879655
	I0917 02:25:48.946773    5221 fix.go:229] Guest: 2024-09-17 02:25:49.085879655 -0700 PDT Remote: 2024-09-17 02:25:48.886837 -0700 PDT m=+37.955385830 (delta=199.042655ms)
	I0917 02:25:48.946795    5221 fix.go:200] guest clock delta is within tolerance: 199.042655ms
	I0917 02:25:48.946799    5221 start.go:83] releasing machines lock for "multinode-232000", held for 37.575103856s
	I0917 02:25:48.946821    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:48.946963    5221 main.go:141] libmachine: (multinode-232000) Calling .GetIP
	I0917 02:25:48.947075    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:48.947409    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:48.947517    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:48.947603    5221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:25:48.947632    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:48.947664    5221 ssh_runner.go:195] Run: cat /version.json
	I0917 02:25:48.947677    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:48.947708    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:48.947770    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:48.947790    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:48.947870    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:48.947884    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:48.947990    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:25:48.948007    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:48.948092    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:25:48.978305    5221 command_runner.go:130] > {"iso_version": "v1.34.0-1726415472-19646", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "7dc55c0008a982396eb57879cd4eab23ab96531e"}
	I0917 02:25:48.978483    5221 ssh_runner.go:195] Run: systemctl --version
	I0917 02:25:48.982966    5221 command_runner.go:130] > systemd 252 (252)
	I0917 02:25:48.982985    5221 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0917 02:25:48.983166    5221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:25:49.036752    5221 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0917 02:25:49.036937    5221 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0917 02:25:49.036978    5221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:25:49.037083    5221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:25:49.051276    5221 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0917 02:25:49.051288    5221 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:25:49.051294    5221 start.go:495] detecting cgroup driver to use...
	I0917 02:25:49.051395    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:25:49.066177    5221 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0917 02:25:49.066484    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:25:49.075357    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:25:49.084087    5221 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:25:49.084134    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:25:49.092689    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:25:49.101467    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:25:49.110202    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:25:49.118748    5221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:25:49.127704    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:25:49.136332    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:25:49.145077    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:25:49.153808    5221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:25:49.161540    5221 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0917 02:25:49.161787    5221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:25:49.169757    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:25:49.271229    5221 ssh_runner.go:195] Run: sudo systemctl restart containerd
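	The sed edits above switch containerd to the cgroupfs driver by forcing SystemdCgroup = false in /etc/containerd/config.toml. A hypothetical spot check (not part of the test run) to confirm the rewrite landed before the restart:
	
		sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml
		# expected after the edit: SystemdCgroup = false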
	I0917 02:25:49.289689    5221 start.go:495] detecting cgroup driver to use...
	I0917 02:25:49.289782    5221 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:25:49.304789    5221 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0917 02:25:49.304802    5221 command_runner.go:130] > [Unit]
	I0917 02:25:49.304807    5221 command_runner.go:130] > Description=Docker Application Container Engine
	I0917 02:25:49.304812    5221 command_runner.go:130] > Documentation=https://docs.docker.com
	I0917 02:25:49.304816    5221 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0917 02:25:49.304820    5221 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0917 02:25:49.304824    5221 command_runner.go:130] > StartLimitBurst=3
	I0917 02:25:49.304828    5221 command_runner.go:130] > StartLimitIntervalSec=60
	I0917 02:25:49.304831    5221 command_runner.go:130] > [Service]
	I0917 02:25:49.304834    5221 command_runner.go:130] > Type=notify
	I0917 02:25:49.304838    5221 command_runner.go:130] > Restart=on-failure
	I0917 02:25:49.304844    5221 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0917 02:25:49.304856    5221 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0917 02:25:49.304862    5221 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0917 02:25:49.304867    5221 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0917 02:25:49.304873    5221 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0917 02:25:49.304879    5221 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0917 02:25:49.304885    5221 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0917 02:25:49.304893    5221 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0917 02:25:49.304899    5221 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0917 02:25:49.304905    5221 command_runner.go:130] > ExecStart=
	I0917 02:25:49.304917    5221 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0917 02:25:49.304921    5221 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0917 02:25:49.304935    5221 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0917 02:25:49.304941    5221 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0917 02:25:49.304949    5221 command_runner.go:130] > LimitNOFILE=infinity
	I0917 02:25:49.304953    5221 command_runner.go:130] > LimitNPROC=infinity
	I0917 02:25:49.304962    5221 command_runner.go:130] > LimitCORE=infinity
	I0917 02:25:49.304967    5221 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0917 02:25:49.304971    5221 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0917 02:25:49.304974    5221 command_runner.go:130] > TasksMax=infinity
	I0917 02:25:49.304978    5221 command_runner.go:130] > TimeoutStartSec=0
	I0917 02:25:49.304983    5221 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0917 02:25:49.304987    5221 command_runner.go:130] > Delegate=yes
	I0917 02:25:49.304991    5221 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0917 02:25:49.304995    5221 command_runner.go:130] > KillMode=process
	I0917 02:25:49.304998    5221 command_runner.go:130] > [Install]
	I0917 02:25:49.305007    5221 command_runner.go:130] > WantedBy=multi-user.target
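
The drop-in above works only because of the empty ExecStart=: systemd accumulates ExecStart= values, so an override must first clear the inherited command before supplying its own, exactly as the unit's comments explain. The same pattern in a minimal hand-written override (paths and flags illustrative, not minikube's):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
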
	I0917 02:25:49.305089    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:25:49.316713    5221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:25:49.333410    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:25:49.344788    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:25:49.355660    5221 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:25:49.376347    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:25:49.387282    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:25:49.402080    5221 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
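
This crictl.yaml is the switch that points crictl, and every CRI call below, at cri-dockerd instead of a containerd socket. Equivalent by hand (a sketch; assumes crictl is already on the PATH):

	cat <<-'EOF' | sudo tee /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/cri-dockerd.sock
	EOF
	sudo crictl version    # expect RuntimeName: docker
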
	I0917 02:25:49.402454    5221 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:25:49.405272    5221 command_runner.go:130] > /usr/bin/cri-dockerd
	I0917 02:25:49.405493    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:25:49.412708    5221 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:25:49.426157    5221 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:25:49.525562    5221 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:25:49.626996    5221 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:25:49.627079    5221 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
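
The 130-byte daemon.json itself is not echoed into the log; the line above says its job is selecting the cgroupfs cgroup driver. A plausible shape, offered as an assumption rather than the exact file minikube writes:

	cat <<-'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF

The daemon-reload and docker restart that follow in the log are what make the new driver take effect.
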
	I0917 02:25:49.641067    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:25:49.732233    5221 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:25:52.043365    5221 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.311103221s)
	I0917 02:25:52.043436    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:25:52.054133    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:25:52.065481    5221 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:25:52.170411    5221 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:25:52.267424    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:25:52.371852    5221 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:25:52.385452    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:25:52.396451    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:25:52.500601    5221 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:25:52.555221    5221 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:25:52.555327    5221 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:25:52.559285    5221 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0917 02:25:52.559296    5221 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0917 02:25:52.559301    5221 command_runner.go:130] > Device: 0,22	Inode: 769         Links: 1
	I0917 02:25:52.559306    5221 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0917 02:25:52.559310    5221 command_runner.go:130] > Access: 2024-09-17 09:25:52.651459721 +0000
	I0917 02:25:52.559324    5221 command_runner.go:130] > Modify: 2024-09-17 09:25:52.651459721 +0000
	I0917 02:25:52.559330    5221 command_runner.go:130] > Change: 2024-09-17 09:25:52.653459677 +0000
	I0917 02:25:52.559333    5221 command_runner.go:130] >  Birth: -
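
"Will wait 60s for socket path" reduces to polling until the path exists as a socket; the stat output above is the confirmation step. A minimal shell rendering of the same wait (loop shape illustrative):

	for _ in $(seq 1 60); do
		[ -S /var/run/cri-dockerd.sock ] && break
		sleep 1
	done
	stat /var/run/cri-dockerd.sock
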
	I0917 02:25:52.559359    5221 start.go:563] Will wait 60s for crictl version
	I0917 02:25:52.559412    5221 ssh_runner.go:195] Run: which crictl
	I0917 02:25:52.562238    5221 command_runner.go:130] > /usr/bin/crictl
	I0917 02:25:52.562381    5221 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:25:52.586241    5221 command_runner.go:130] > Version:  0.1.0
	I0917 02:25:52.586254    5221 command_runner.go:130] > RuntimeName:  docker
	I0917 02:25:52.586274    5221 command_runner.go:130] > RuntimeVersion:  27.2.1
	I0917 02:25:52.586362    5221 command_runner.go:130] > RuntimeApiVersion:  v1
	I0917 02:25:52.587478    5221 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:25:52.587565    5221 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:25:52.602824    5221 command_runner.go:130] > 27.2.1
	I0917 02:25:52.603853    5221 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:25:52.623426    5221 command_runner.go:130] > 27.2.1
	I0917 02:25:52.667611    5221 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:25:52.667657    5221 main.go:141] libmachine: (multinode-232000) Calling .GetIP
	I0917 02:25:52.668059    5221 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:25:52.672597    5221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
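
The one-liner above is a careful read-modify-write of /etc/hosts: strip any stale host.minikube.internal entry, append the current mapping, and use sudo only for the final copy so the shell redirection never needs root. Unrolled:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.169.0.1\thost.minikube.internal'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
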
	I0917 02:25:52.682201    5221 kubeadm.go:883] updating cluster {Name:multinode-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.16 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 02:25:52.682298    5221 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:25:52.682365    5221 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:25:52.694663    5221 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0917 02:25:52.694695    5221 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0917 02:25:52.694700    5221 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 02:25:52.694708    5221 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0917 02:25:52.694713    5221 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0917 02:25:52.694717    5221 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0917 02:25:52.694723    5221 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0917 02:25:52.694726    5221 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0917 02:25:52.694730    5221 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:25:52.694734    5221 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0917 02:25:52.695389    5221 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:25:52.695402    5221 docker.go:615] Images already preloaded, skipping extraction
	I0917 02:25:52.695485    5221 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:25:52.708148    5221 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0917 02:25:52.708161    5221 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 02:25:52.708166    5221 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0917 02:25:52.708170    5221 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0917 02:25:52.708173    5221 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0917 02:25:52.708177    5221 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0917 02:25:52.708193    5221 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0917 02:25:52.708199    5221 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0917 02:25:52.708202    5221 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:25:52.708206    5221 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0917 02:25:52.708835    5221 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:25:52.708852    5221 cache_images.go:84] Images are preloaded, skipping loading
	I0917 02:25:52.708861    5221 kubeadm.go:934] updating node { 192.169.0.14 8443 v1.31.1 docker true true} ...
	I0917 02:25:52.708941    5221 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-232000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
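
This rendered drop-in pins the node's identity (hostname override and node IP) onto the stock kubelet unit via the same clear-then-set ExecStart pair seen earlier. To inspect the merged result on the node (standard systemd tooling; the drop-in path matches the scp destination a few lines below):

	sudo systemctl cat kubelet
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
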
	I0917 02:25:52.709025    5221 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:25:52.741672    5221 command_runner.go:130] > cgroupfs
	I0917 02:25:52.742672    5221 cni.go:84] Creating CNI manager for ""
	I0917 02:25:52.742682    5221 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 02:25:52.742698    5221 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:25:52.742716    5221 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.14 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-232000 NodeName:multinode-232000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:25:52.742802    5221 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-232000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 02:25:52.742876    5221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:25:52.750643    5221 command_runner.go:130] > kubeadm
	I0917 02:25:52.750652    5221 command_runner.go:130] > kubectl
	I0917 02:25:52.750656    5221 command_runner.go:130] > kubelet
	I0917 02:25:52.750671    5221 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:25:52.750724    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 02:25:52.758162    5221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0917 02:25:52.771785    5221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:25:52.784981    5221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0917 02:25:52.798896    5221 ssh_runner.go:195] Run: grep 192.169.0.14	control-plane.minikube.internal$ /etc/hosts
	I0917 02:25:52.801715    5221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:25:52.810997    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:25:52.907165    5221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:25:52.922386    5221 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000 for IP: 192.169.0.14
	I0917 02:25:52.922399    5221 certs.go:194] generating shared ca certs ...
	I0917 02:25:52.922409    5221 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:25:52.922601    5221 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:25:52.922675    5221 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:25:52.922690    5221 certs.go:256] generating profile certs ...
	I0917 02:25:52.922796    5221 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.key
	I0917 02:25:52.922874    5221 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/apiserver.key.4fa80143
	I0917 02:25:52.922951    5221 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/proxy-client.key
	I0917 02:25:52.922959    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:25:52.922979    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:25:52.922997    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:25:52.923014    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:25:52.923031    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:25:52.923065    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:25:52.923099    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:25:52.923118    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:25:52.923222    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:25:52.923266    5221 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:25:52.923275    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:25:52.923306    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:25:52.923335    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:25:52.923361    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:25:52.923424    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:25:52.923461    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:25:52.923481    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:25:52.923497    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:25:52.923958    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:25:52.949310    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:25:52.973114    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:25:52.998495    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:25:53.022314    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 02:25:53.041667    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 02:25:53.060613    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:25:53.079637    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 02:25:53.099094    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:25:53.117840    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:25:53.137106    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:25:53.156395    5221 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 02:25:53.170149    5221 ssh_runner.go:195] Run: openssl version
	I0917 02:25:53.174089    5221 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0917 02:25:53.174290    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:25:53.183220    5221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:25:53.186389    5221 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:25:53.186491    5221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:25:53.186529    5221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:25:53.190436    5221 command_runner.go:130] > 3ec20f2e
	I0917 02:25:53.190652    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:25:53.199463    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:25:53.208313    5221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:25:53.211525    5221 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:25:53.211736    5221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:25:53.211781    5221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:25:53.215664    5221 command_runner.go:130] > b5213941
	I0917 02:25:53.215865    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:25:53.224761    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:25:53.233608    5221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:25:53.236787    5221 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:25:53.236965    5221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:25:53.237013    5221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:25:53.240934    5221 command_runner.go:130] > 51391683
	I0917 02:25:53.241093    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
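
The 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-name hashes: OpenSSL resolves CAs in /etc/ssl/certs by that hash, so each PEM needs a <hash>.0 symlink. The three rounds condense to (sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"
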
	I0917 02:25:53.250009    5221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:25:53.253211    5221 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:25:53.253220    5221 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0917 02:25:53.253225    5221 command_runner.go:130] > Device: 253,1	Inode: 1052957     Links: 1
	I0917 02:25:53.253230    5221 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0917 02:25:53.253236    5221 command_runner.go:130] > Access: 2024-09-17 09:21:47.498539680 +0000
	I0917 02:25:53.253240    5221 command_runner.go:130] > Modify: 2024-09-17 09:21:47.498539680 +0000
	I0917 02:25:53.253243    5221 command_runner.go:130] > Change: 2024-09-17 09:21:47.498539680 +0000
	I0917 02:25:53.253248    5221 command_runner.go:130] >  Birth: 2024-09-17 09:21:47.498539680 +0000
	I0917 02:25:53.253359    5221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:25:53.257383    5221 command_runner.go:130] > Certificate will not expire
	I0917 02:25:53.257582    5221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:25:53.261570    5221 command_runner.go:130] > Certificate will not expire
	I0917 02:25:53.261661    5221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:25:53.266080    5221 command_runner.go:130] > Certificate will not expire
	I0917 02:25:53.266153    5221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:25:53.270348    5221 command_runner.go:130] > Certificate will not expire
	I0917 02:25:53.270434    5221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:25:53.274559    5221 command_runner.go:130] > Certificate will not expire
	I0917 02:25:53.274656    5221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 02:25:53.278684    5221 command_runner.go:130] > Certificate will not expire
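
Each "-checkend 86400" asks whether the certificate will still be valid 86400 seconds (one day) from now; exit status 0 plus the "will not expire" message means no regeneration is needed. As a guard (sketch, path as in the log):

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
		|| echo "expires within 24h, regenerate"
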
	I0917 02:25:53.278858    5221 kubeadm.go:392] StartCluster: {Name:multinode-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.16 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:25:53.278995    5221 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:25:53.291333    5221 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:25:53.299530    5221 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0917 02:25:53.299542    5221 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0917 02:25:53.299547    5221 command_runner.go:130] > /var/lib/minikube/etcd:
	I0917 02:25:53.299550    5221 command_runner.go:130] > member
	I0917 02:25:53.299588    5221 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 02:25:53.299601    5221 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:25:53.299653    5221 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:25:53.307627    5221 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:25:53.307940    5221 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-232000" does not appear in /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:25:53.308034    5221 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1025/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-232000" cluster setting kubeconfig missing "multinode-232000" context setting]
	I0917 02:25:53.308220    5221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:25:53.309013    5221 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:25:53.309237    5221 kapi.go:59] client config for multinode-232000: &rest.Config{Host:"https://192.169.0.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x410b720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:25:53.309564    5221 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 02:25:53.309777    5221 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:25:53.317711    5221 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.14
	I0917 02:25:53.317731    5221 kubeadm.go:1160] stopping kube-system containers ...
	I0917 02:25:53.317799    5221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:25:53.331841    5221 command_runner.go:130] > 8b2f4ea197c5
	I0917 02:25:53.331853    5221 command_runner.go:130] > f7ccad53a257
	I0917 02:25:53.331857    5221 command_runner.go:130] > 64f91acf5d83
	I0917 02:25:53.331860    5221 command_runner.go:130] > 84e22c05755c
	I0917 02:25:53.331863    5221 command_runner.go:130] > 3dc3bd4da839
	I0917 02:25:53.331868    5221 command_runner.go:130] > 96e8ac7b181c
	I0917 02:25:53.331871    5221 command_runner.go:130] > b6a933d5abb7
	I0917 02:25:53.331875    5221 command_runner.go:130] > 90f44d581694
	I0917 02:25:53.331893    5221 command_runner.go:130] > ab8e6362f133
	I0917 02:25:53.331899    5221 command_runner.go:130] > 5db9fa24f683
	I0917 02:25:53.331908    5221 command_runner.go:130] > 8e788bff41ec
	I0917 02:25:53.331911    5221 command_runner.go:130] > ff3a45c5df2e
	I0917 02:25:53.331924    5221 command_runner.go:130] > f9ddf66585b5
	I0917 02:25:53.331929    5221 command_runner.go:130] > 8e04470f77bc
	I0917 02:25:53.331933    5221 command_runner.go:130] > 77ac0fcdf71b
	I0917 02:25:53.331936    5221 command_runner.go:130] > 8998ef0cd2fb
	I0917 02:25:53.331952    5221 docker.go:483] Stopping containers: [8b2f4ea197c5 f7ccad53a257 64f91acf5d83 84e22c05755c 3dc3bd4da839 96e8ac7b181c b6a933d5abb7 90f44d581694 ab8e6362f133 5db9fa24f683 8e788bff41ec ff3a45c5df2e f9ddf66585b5 8e04470f77bc 77ac0fcdf71b 8998ef0cd2fb]
	I0917 02:25:53.332033    5221 ssh_runner.go:195] Run: docker stop 8b2f4ea197c5 f7ccad53a257 64f91acf5d83 84e22c05755c 3dc3bd4da839 96e8ac7b181c b6a933d5abb7 90f44d581694 ab8e6362f133 5db9fa24f683 8e788bff41ec ff3a45c5df2e f9ddf66585b5 8e04470f77bc 77ac0fcdf71b 8998ef0cd2fb
	I0917 02:25:53.346953    5221 command_runner.go:130] > 8b2f4ea197c5
	I0917 02:25:53.346973    5221 command_runner.go:130] > f7ccad53a257
	I0917 02:25:53.346977    5221 command_runner.go:130] > 64f91acf5d83
	I0917 02:25:53.346980    5221 command_runner.go:130] > 84e22c05755c
	I0917 02:25:53.346983    5221 command_runner.go:130] > 3dc3bd4da839
	I0917 02:25:53.346986    5221 command_runner.go:130] > 96e8ac7b181c
	I0917 02:25:53.346989    5221 command_runner.go:130] > b6a933d5abb7
	I0917 02:25:53.346992    5221 command_runner.go:130] > 90f44d581694
	I0917 02:25:53.346995    5221 command_runner.go:130] > ab8e6362f133
	I0917 02:25:53.346999    5221 command_runner.go:130] > 5db9fa24f683
	I0917 02:25:53.347003    5221 command_runner.go:130] > 8e788bff41ec
	I0917 02:25:53.347385    5221 command_runner.go:130] > ff3a45c5df2e
	I0917 02:25:53.347392    5221 command_runner.go:130] > f9ddf66585b5
	I0917 02:25:53.347396    5221 command_runner.go:130] > 8e04470f77bc
	I0917 02:25:53.347572    5221 command_runner.go:130] > 77ac0fcdf71b
	I0917 02:25:53.347579    5221 command_runner.go:130] > 8998ef0cd2fb
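
The stop above leans on Docker's kubelet naming convention, k8s_<container>_<pod>_<namespace>_..., which is why a single name regex can select every kube-system container. The list-then-stop pair collapses to one pipeline (sketch):

	docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' \
		| xargs -r docker stop
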
	I0917 02:25:53.348813    5221 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 02:25:53.362209    5221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 02:25:53.370338    5221 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0917 02:25:53.370349    5221 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0917 02:25:53.370355    5221 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0917 02:25:53.370361    5221 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 02:25:53.370416    5221 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 02:25:53.370424    5221 kubeadm.go:157] found existing configuration files:
	
	I0917 02:25:53.370486    5221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 02:25:53.378115    5221 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 02:25:53.378130    5221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 02:25:53.378173    5221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 02:25:53.385977    5221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 02:25:53.393671    5221 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 02:25:53.393699    5221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 02:25:53.393755    5221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 02:25:53.401670    5221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 02:25:53.409227    5221 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 02:25:53.409251    5221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 02:25:53.409297    5221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 02:25:53.417259    5221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 02:25:53.424729    5221 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 02:25:53.424749    5221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 02:25:53.424798    5221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
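
The four grep-then-rm rounds above apply one rule per kubeconfig: keep the file only if it already targets the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. As a sketch:

	for f in admin kubelet controller-manager scheduler; do
		sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
			|| sudo rm -f /etc/kubernetes/$f.conf
	done
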
	I0917 02:25:53.432828    5221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 02:25:53.440635    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:25:53.510000    5221 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 02:25:53.510169    5221 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0917 02:25:53.510375    5221 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0917 02:25:53.510537    5221 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 02:25:53.510763    5221 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0917 02:25:53.510941    5221 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0917 02:25:53.511194    5221 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0917 02:25:53.511398    5221 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0917 02:25:53.511569    5221 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0917 02:25:53.511729    5221 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 02:25:53.511890    5221 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 02:25:53.512068    5221 command_runner.go:130] > [certs] Using the existing "sa" key
	I0917 02:25:53.513110    5221 command_runner.go:130] ! W0917 09:25:53.649117    1323 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:53.513128    5221 command_runner.go:130] ! W0917 09:25:53.650342    1323 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
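
These v1beta3 deprecation warnings repeat for every kubeadm phase below; they are harmless here, and kubeadm's own message gives the fix (file names are kubeadm's placeholders):

	kubeadm config migrate --old-config old.yaml --new-config new.yaml
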
	I0917 02:25:53.513142    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:25:53.549689    5221 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 02:25:53.933539    5221 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 02:25:54.068325    5221 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 02:25:54.205343    5221 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 02:25:54.330285    5221 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 02:25:54.568018    5221 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 02:25:54.570199    5221 command_runner.go:130] ! W0917 09:25:53.690144    1327 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.570217    5221 command_runner.go:130] ! W0917 09:25:53.690801    1327 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.570234    5221 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.057078246s)
	I0917 02:25:54.570253    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:25:54.620172    5221 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 02:25:54.624895    5221 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 02:25:54.624904    5221 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0917 02:25:54.727485    5221 command_runner.go:130] ! W0917 09:25:54.748587    1332 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.727507    5221 command_runner.go:130] ! W0917 09:25:54.749120    1332 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.727531    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:25:54.769003    5221 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 02:25:54.769017    5221 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 02:25:54.771732    5221 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 02:25:54.771750    5221 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 02:25:54.779487    5221 command_runner.go:130] ! W0917 09:25:54.910552    1360 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.779510    5221 command_runner.go:130] ! W0917 09:25:54.911046    1360 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.779524    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:25:54.846913    5221 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 02:25:54.849869    5221 command_runner.go:130] ! W0917 09:25:54.986774    1368 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.849893    5221 command_runner.go:130] ! W0917 09:25:54.987728    1368 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.849929    5221 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:25:54.850003    5221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:25:55.350188    5221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:25:55.851457    5221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:25:55.865476    5221 command_runner.go:130] > 1651
	I0917 02:25:55.865505    5221 api_server.go:72] duration metric: took 1.015586828s to wait for apiserver process to appear ...
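
The process wait reduces to polling pgrep, where -x anchors the match, -f matches against the full command line, and -n returns the newest matching PID. An equivalent loop (sketch):

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		sleep 0.5
	done
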
	I0917 02:25:55.865512    5221 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:25:55.865528    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:25:58.289702    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 02:25:58.289718    5221 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 02:25:58.289726    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:25:58.322928    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 02:25:58.322948    5221 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 02:25:58.366073    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:25:58.372850    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 02:25:58.372865    5221 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 02:25:58.865828    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:25:58.870757    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 02:25:58.870780    5221 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 02:25:59.366312    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:25:59.370712    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 02:25:59.370724    5221 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 02:25:59.865828    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:25:59.869097    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0917 02:25:59.869160    5221 round_trippers.go:463] GET https://192.169.0.14:8443/version
	I0917 02:25:59.869165    5221 round_trippers.go:469] Request Headers:
	I0917 02:25:59.869172    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:25:59.869177    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:25:59.874624    5221 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:25:59.874634    5221 round_trippers.go:577] Response Headers:
	I0917 02:25:59.874639    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:25:59.874643    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:25:59.874645    5221 round_trippers.go:580]     Content-Length: 263
	I0917 02:25:59.874648    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:25:59.874651    5221 round_trippers.go:580]     Audit-Id: 05c5c2ea-2c5b-4bba-a27b-2ae34fbcbd06
	I0917 02:25:59.874654    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:25:59.874656    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:25:59.874673    5221 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0917 02:25:59.874718    5221 api_server.go:141] control plane version: v1.31.1
	I0917 02:25:59.874727    5221 api_server.go:131] duration metric: took 4.009192512s to wait for apiserver health ...
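The /version payload shown above is the apiserver's standard version object and decodes directly into version.Info from k8s.io/apimachinery. A small self-contained sketch:

    // Decode the version body logged above into version.Info.
    package main

    import (
        "encoding/json"
        "fmt"

        "k8s.io/apimachinery/pkg/version"
    )

    func main() {
        // Body copied from the log above (whitespace between tokens is fine).
        body := []byte(`{"major":"1","minor":"31","gitVersion":"v1.31.1",
            "gitCommit":"948afe5ca072329a73c8e79ed5938717a5cb3d21","gitTreeState":"clean",
            "buildDate":"2024-09-11T21:22:08Z","goVersion":"go1.22.6","compiler":"gc",
            "platform":"linux/amd64"}`)

        var info version.Info
        if err := json.Unmarshal(body, &info); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", info.GitVersion) // v1.31.1
    }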
	I0917 02:25:59.874742    5221 cni.go:84] Creating CNI manager for ""
	I0917 02:25:59.874746    5221 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 02:25:59.896381    5221 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0917 02:25:59.916935    5221 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 02:25:59.920792    5221 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0917 02:25:59.920804    5221 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0917 02:25:59.920811    5221 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0917 02:25:59.920820    5221 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0917 02:25:59.920828    5221 command_runner.go:130] > Access: 2024-09-17 09:25:20.659884632 +0000
	I0917 02:25:59.920834    5221 command_runner.go:130] > Modify: 2024-09-15 21:28:20.000000000 +0000
	I0917 02:25:59.920839    5221 command_runner.go:130] > Change: 2024-09-17 09:25:19.114884636 +0000
	I0917 02:25:59.920842    5221 command_runner.go:130] >  Birth: -
	I0917 02:25:59.921065    5221 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0917 02:25:59.921074    5221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)

	I0917 02:25:59.935098    5221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 02:26:00.271015    5221 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0917 02:26:00.286389    5221 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0917 02:26:00.395234    5221 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0917 02:26:00.455408    5221 command_runner.go:130] > daemonset.apps/kindnet configured
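The two steps above copy the rendered manifest from memory to /var/tmp/minikube/cni.yaml on the guest ("scp memory") and then apply it with the bundled kubectl. A hypothetical sketch of the same sequence (shelling out to ssh and the host string are assumptions for illustration):

    // Hypothetical sketch: stream an in-memory manifest to the guest,
    // then apply it with the kubectl binary inside the VM.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // copyToGuest pipes data into `sudo tee` on the guest, creating the file.
    func copyToGuest(host, path string, data []byte) error {
        cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %s >/dev/null", path))
        cmd.Stdin = bytes.NewReader(data)
        return cmd.Run()
    }

    // applyManifest runs kubectl apply on the guest against the kubeconfig.
    func applyManifest(host, kubectl, kubeconfig, path string) ([]byte, error) {
        remote := fmt.Sprintf("sudo %s apply --kubeconfig=%s -f %s", kubectl, kubeconfig, path)
        return exec.Command("ssh", host, remote).CombinedOutput()
    }

    func main() {
        manifest := []byte("# kindnet CNI manifest would go here\n")
        if err := copyToGuest("docker@192.169.0.14", "/var/tmp/minikube/cni.yaml", manifest); err != nil {
            panic(err)
        }
        out, err := applyManifest("docker@192.169.0.14",
            "/var/lib/minikube/binaries/v1.31.1/kubectl",
            "/var/lib/minikube/kubeconfig", "/var/tmp/minikube/cni.yaml")
        fmt.Printf("%s (err=%v)\n", out, err)
    }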
	I0917 02:26:00.456885    5221 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 02:26:00.456935    5221 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 02:26:00.456945    5221 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 02:26:00.456991    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:00.456996    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.457002    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.457007    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.460212    5221 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:26:00.460221    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.460226    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.460229    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.460232    5221 round_trippers.go:580]     Audit-Id: cec1c0fd-eae1-4561-bea4-e1b4450f66f7
	I0917 02:26:00.460235    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.460237    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.460239    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.461130    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"857"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89937 chars]
	I0917 02:26:00.464211    5221 system_pods.go:59] 12 kube-system pods found
	I0917 02:26:00.464227    5221 system_pods.go:61] "coredns-7c65d6cfc9-hr8rd" [c990c87f-921e-45ba-845b-499147aaa1f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:26:00.464234    5221 system_pods.go:61] "etcd-multinode-232000" [023b8525-6267-41df-ab63-f9c82adf3da1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 02:26:00.464238    5221 system_pods.go:61] "kindnet-7djsb" [4b28da1f-ce8e-43a9-bda0-e44de7b6d582] Running
	I0917 02:26:00.464241    5221 system_pods.go:61] "kindnet-bz9gj" [42665fdd-c209-43ac-8852-3fd0517abce4] Running
	I0917 02:26:00.464244    5221 system_pods.go:61] "kindnet-fgvhm" [f8fe7dd6-85d9-447e-88f1-d98d354a0802] Running
	I0917 02:26:00.464248    5221 system_pods.go:61] "kube-apiserver-multinode-232000" [4bc0fa4f-4ca7-478d-8b7c-b59c24a56faa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 02:26:00.464253    5221 system_pods.go:61] "kube-controller-manager-multinode-232000" [788e2a30-fcea-4f4c-afc3-52d73d046e1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 02:26:00.464256    5221 system_pods.go:61] "kube-proxy-8fb4t" [e73b5d46-804f-4a13-a286-f0194436c3fc] Running
	I0917 02:26:00.464260    5221 system_pods.go:61] "kube-proxy-9s8zh" [8516d216-3857-4702-9656-97c8c91337fc] Running
	I0917 02:26:00.464262    5221 system_pods.go:61] "kube-proxy-xlb2z" [66e8dada-5a23-453e-ba6e-a9146d3467e7] Running
	I0917 02:26:00.464266    5221 system_pods.go:61] "kube-scheduler-multinode-232000" [a38a42a2-e0f9-4c6e-aa99-8dae3f326090] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 02:26:00.464269    5221 system_pods.go:61] "storage-provisioner" [878f83a8-de4f-48b8-98ac-2d34171091ae] Running
	I0917 02:26:00.464273    5221 system_pods.go:74] duration metric: took 7.380959ms to wait for pod list to return data ...
	I0917 02:26:00.464279    5221 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:26:00.464319    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0917 02:26:00.464324    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.464329    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.464333    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.466448    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:00.466458    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.466465    5221 round_trippers.go:580]     Audit-Id: 10a26a59-7740-4dc4-b164-f2eedcd6348d
	I0917 02:26:00.466468    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.466473    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.466477    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.466480    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.466482    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.466610    5221 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"857"},"items":[{"metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"785","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14803 chars]
	I0917 02:26:00.467117    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:00.467129    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:00.467136    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:00.467140    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:00.467143    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:00.467147    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:00.467150    5221 node_conditions.go:105] duration metric: took 2.867049ms to run NodePressure ...
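The NodePressure pass above reads each node's reported capacity (cpu and ephemeral-storage) from the NodeList. With client-go, an equivalent lookup might look like this sketch (the kubeconfig path is taken from the log; the rest is illustrative):

    // Sketch: list nodes and print the two capacity figures logged above.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, node := range nodes.Items {
            storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := node.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", node.Name, storage.String(), cpu.String())
        }
    }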
	I0917 02:26:00.467161    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:26:00.568747    5221 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0917 02:26:00.721004    5221 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0917 02:26:00.722034    5221 command_runner.go:130] ! W0917 09:26:00.657631    2181 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:26:00.722054    5221 command_runner.go:130] ! W0917 09:26:00.658264    2181 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:26:00.722106    5221 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 02:26:00.722160    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0917 02:26:00.722166    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.722172    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.722176    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.724164    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.724175    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.724180    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.724182    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.724185    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.724187    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.724207    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.724215    5221 round_trippers.go:580]     Audit-Id: 18efc7ff-7678-4205-b456-90f2887b9eab
	I0917 02:26:00.724775    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"etcd-multinode-232000","namespace":"kube-system","uid":"023b8525-6267-41df-ab63-f9c82adf3da1","resourceVersion":"831","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.mirror":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953777Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 31218 chars]
	I0917 02:26:00.725484    5221 kubeadm.go:739] kubelet initialised
	I0917 02:26:00.725493    5221 kubeadm.go:740] duration metric: took 3.377858ms waiting for restarted kubelet to initialise ...
	I0917 02:26:00.725500    5221 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:26:00.725530    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:00.725535    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.725541    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.725544    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.727271    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.727277    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.727282    5221 round_trippers.go:580]     Audit-Id: c3d42fe6-d7d2-44ec-ae2b-ee38c04872a2
	I0917 02:26:00.727285    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.727287    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.727289    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.727291    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.727293    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.728152    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89937 chars]
	I0917 02:26:00.730071    5221 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:00.730106    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:00.730111    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.730117    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.730119    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.731250    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.731257    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.731261    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.731265    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.731268    5221 round_trippers.go:580]     Audit-Id: a78352ed-26c0-4e81-a8bf-669761c79dd5
	I0917 02:26:00.731271    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.731274    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.731277    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.731408    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0917 02:26:00.731655    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:00.731662    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.731667    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.731670    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.732820    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.732828    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.732834    5221 round_trippers.go:580]     Audit-Id: 364120fb-93d4-4739-ab35-b6388d7029de
	I0917 02:26:00.732840    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.732845    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.732849    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.732853    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.732857    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.732992    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"785","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5299 chars]
	I0917 02:26:00.733169    5221 pod_ready.go:98] node "multinode-232000" hosting pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:00.733179    5221 pod_ready.go:82] duration metric: took 3.099005ms for pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:00.733185    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000" hosting pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
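Each wait cycle above pairs a GET on the pod with a GET on its hosting node and skips ahead when the node itself is not yet "Ready". A hedged client-go sketch of that pod-plus-node readiness check (the pod name and 4m budget are taken from the log; the looping logic is an assumption for illustration):

    // Sketch: poll a pod's Ready condition and its hosting node's Ready
    // condition, the two checks paired in the log cycle above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget above
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-hr8rd", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                node, err := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
                if err == nil && nodeReady(node) {
                    fmt.Println("pod and hosting node are Ready")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for readiness")
    }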
	I0917 02:26:00.733190    5221 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:00.733215    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-232000
	I0917 02:26:00.733220    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.733225    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.733230    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.734322    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.734330    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.734334    5221 round_trippers.go:580]     Audit-Id: a274e108-7133-432c-a114-30d2b2440538
	I0917 02:26:00.734339    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.734346    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.734351    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.734355    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.734358    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.734505    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-232000","namespace":"kube-system","uid":"023b8525-6267-41df-ab63-f9c82adf3da1","resourceVersion":"831","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.mirror":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953777Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6887 chars]
	I0917 02:26:00.734739    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:00.734746    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.734752    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.734755    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.735830    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.735839    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.735846    5221 round_trippers.go:580]     Audit-Id: 295e208e-d170-464f-81af-780e49267dd7
	I0917 02:26:00.735851    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.735855    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.735861    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.735868    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.735872    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.736015    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"785","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5299 chars]
	I0917 02:26:00.736177    5221 pod_ready.go:98] node "multinode-232000" hosting pod "etcd-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:00.736185    5221 pod_ready.go:82] duration metric: took 2.991217ms for pod "etcd-multinode-232000" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:00.736191    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000" hosting pod "etcd-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:00.736203    5221 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:00.736229    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-232000
	I0917 02:26:00.736233    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.736238    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.736242    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.737488    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.737494    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.737499    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.737507    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.737511    5221 round_trippers.go:580]     Audit-Id: 38f7c539-904b-4bc2-ab57-0e7c28997026
	I0917 02:26:00.737513    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.737516    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.737518    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.737712    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-232000","namespace":"kube-system","uid":"4bc0fa4f-4ca7-478d-8b7c-b59c24a56faa","resourceVersion":"830","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"67a66a13a29b1cc7d88ed81650cbab1c","kubernetes.io/config.mirror":"67a66a13a29b1cc7d88ed81650cbab1c","kubernetes.io/config.seen":"2024-09-17T09:21:50.527954370Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0917 02:26:00.737930    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:00.737937    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.737942    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.737947    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.739093    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.739101    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.739107    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.739112    5221 round_trippers.go:580]     Audit-Id: ad26ed6c-c42e-43ba-9d3b-e6f1982265f9
	I0917 02:26:00.739116    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.739120    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.739123    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.739126    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.739313    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"785","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5299 chars]
	I0917 02:26:00.739475    5221 pod_ready.go:98] node "multinode-232000" hosting pod "kube-apiserver-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:00.739483    5221 pod_ready.go:82] duration metric: took 3.275465ms for pod "kube-apiserver-multinode-232000" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:00.739488    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000" hosting pod "kube-apiserver-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:00.739493    5221 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:00.739520    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-232000
	I0917 02:26:00.739525    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.739530    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.739534    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.740750    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.740757    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.740761    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.740765    5221 round_trippers.go:580]     Audit-Id: 01d33272-6141-4eb7-b512-d6c88b5da131
	I0917 02:26:00.740768    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.740771    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.740773    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.740776    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.740970    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-232000","namespace":"kube-system","uid":"788e2a30-fcea-4f4c-afc3-52d73d046e1d","resourceVersion":"827","creationTimestamp":"2024-09-17T09:21:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70ae9bf162cfffffa860fd43666d4b44","kubernetes.io/config.mirror":"70ae9bf162cfffffa860fd43666d4b44","kubernetes.io/config.seen":"2024-09-17T09:21:55.992286729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0917 02:26:00.859144    5221 request.go:632] Waited for 117.91154ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:00.859210    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:00.859218    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.859226    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.859231    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.863641    5221 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:26:00.863654    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.863659    5221 round_trippers.go:580]     Audit-Id: 77a25608-6114-48c9-9634-131e5aa8ab60
	I0917 02:26:00.863662    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.863665    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.863668    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.863686    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.863689    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:01 GMT
	I0917 02:26:00.863757    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"785","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5299 chars]
	I0917 02:26:00.863948    5221 pod_ready.go:98] node "multinode-232000" hosting pod "kube-controller-manager-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:00.863959    5221 pod_ready.go:82] duration metric: took 124.46061ms for pod "kube-controller-manager-multinode-232000" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:00.863967    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000" hosting pod "kube-controller-manager-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
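The request.go "Waited for ... due to client-side throttling" lines above come from client-go's token-bucket rate limiter, which defaults to QPS 5 with burst 10, so bursts of polling GETs queue on the client side before they are sent. When you control the client configuration, the limits can be raised on rest.Config, as in this sketch (the values chosen are illustrative, not a recommendation):

    // Sketch: raise client-go's default rate limits (QPS=5, Burst=10),
    // which otherwise produce the "Waited for" throttling log above.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default 5
        cfg.Burst = 100 // default 10
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", cs)
    }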
	I0917 02:26:00.863973    5221 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8fb4t" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:01.057657    5221 request.go:632] Waited for 193.636685ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fb4t
	I0917 02:26:01.057703    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fb4t
	I0917 02:26:01.057722    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:01.057735    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:01.057741    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:01.060635    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:01.060648    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:01.060655    5221 round_trippers.go:580]     Audit-Id: bd584152-cae4-4f0a-af03-4be48c6f706d
	I0917 02:26:01.060658    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:01.060663    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:01.060667    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:01.060670    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:01.060674    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:01 GMT
	I0917 02:26:01.060923    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8fb4t","generateName":"kube-proxy-","namespace":"kube-system","uid":"e73b5d46-804f-4a13-a286-f0194436c3fc","resourceVersion":"516","creationTimestamp":"2024-09-17T09:22:43Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0917 02:26:01.258483    5221 request.go:632] Waited for 197.196092ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:01.258590    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:01.258600    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:01.258611    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:01.258621    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:01.261558    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:01.261574    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:01.261581    5221 round_trippers.go:580]     Audit-Id: 23ec325f-145e-4144-8462-c939547787a6
	I0917 02:26:01.261586    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:01.261590    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:01.261593    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:01.261616    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:01.261623    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:01 GMT
	I0917 02:26:01.261757    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"b0d6988f-c01e-465b-b2df-6e79ea652296","resourceVersion":"581","creationTimestamp":"2024-09-17T09:22:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_22_44_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3825 chars]
	I0917 02:26:01.261975    5221 pod_ready.go:93] pod "kube-proxy-8fb4t" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:01.261987    5221 pod_ready.go:82] duration metric: took 398.006781ms for pod "kube-proxy-8fb4t" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:01.261996    5221 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9s8zh" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:01.457792    5221 request.go:632] Waited for 195.747165ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9s8zh
	I0917 02:26:01.457871    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9s8zh
	I0917 02:26:01.457883    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:01.457894    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:01.457902    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:01.460228    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:01.460241    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:01.460247    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:01 GMT
	I0917 02:26:01.460252    5221 round_trippers.go:580]     Audit-Id: 36ac8366-ce7c-4dcb-8007-d7a60e2f53c5
	I0917 02:26:01.460255    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:01.460258    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:01.460261    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:01.460265    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:01.460367    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9s8zh","generateName":"kube-proxy-","namespace":"kube-system","uid":"8516d216-3857-4702-9656-97c8c91337fc","resourceVersion":"854","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6394 chars]
	I0917 02:26:01.658875    5221 request.go:632] Waited for 198.175704ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:01.658954    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:01.658960    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:01.658966    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:01.658970    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:01.674149    5221 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0917 02:26:01.674162    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:01.674167    5221 round_trippers.go:580]     Audit-Id: 6de142d5-08f6-4911-ab18-e321199850b4
	I0917 02:26:01.674171    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:01.674174    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:01.674182    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:01.674185    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:01.674189    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:01 GMT
	I0917 02:26:01.679058    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"785","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5299 chars]
	I0917 02:26:01.679257    5221 pod_ready.go:98] node "multinode-232000" hosting pod "kube-proxy-9s8zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:01.679269    5221 pod_ready.go:82] duration metric: took 417.266115ms for pod "kube-proxy-9s8zh" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:01.679276    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000" hosting pod "kube-proxy-9s8zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
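	[editor's note] The pod_ready:98/67 lines show the wait loop skipping pods whose host node is not Ready: a pod scheduled on a NotReady node cannot meaningfully become Ready, so waiting on it would only burn the 4m budget. A sketch of the gating check, assuming readiness is read from the node's NodeReady condition as in the node response above:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReady reports whether the NodeReady condition is True. Pods hosted on
// nodes where this returns false are skipped by the readiness wait above.
func nodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	node := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(nodeReady(node)) // false -> the pod wait is skipped, as logged
}
```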
	I0917 02:26:01.679283    5221 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xlb2z" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:01.857364    5221 request.go:632] Waited for 178.043065ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xlb2z
	I0917 02:26:01.857410    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xlb2z
	I0917 02:26:01.857418    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:01.857444    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:01.857450    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:01.859399    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:01.859409    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:01.859414    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:01.859417    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:02 GMT
	I0917 02:26:01.859420    5221 round_trippers.go:580]     Audit-Id: 7ab85c97-7330-473f-9b12-88f73918958c
	I0917 02:26:01.859422    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:01.859426    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:01.859429    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:01.859755    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xlb2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"66e8dada-5a23-453e-ba6e-a9146d3467e7","resourceVersion":"742","creationTimestamp":"2024-09-17T09:23:37Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:23:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0917 02:26:02.057154    5221 request.go:632] Waited for 197.135315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m03
	I0917 02:26:02.057211    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m03
	I0917 02:26:02.057217    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:02.057223    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:02.057227    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:02.062226    5221 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:26:02.062239    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:02.062244    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:02 GMT
	I0917 02:26:02.062247    5221 round_trippers.go:580]     Audit-Id: ed528017-c96f-4b94-af17-c6026481838a
	I0917 02:26:02.062250    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:02.062258    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:02.062262    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:02.062264    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:02.062682    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m03","uid":"ca6d8a0b-78e8-401d-8fd0-21af7b79983d","resourceVersion":"768","creationTimestamp":"2024-09-17T09:24:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_24_31_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:24:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3642 chars]
	I0917 02:26:02.062850    5221 pod_ready.go:93] pod "kube-proxy-xlb2z" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:02.062860    5221 pod_ready.go:82] duration metric: took 383.57032ms for pod "kube-proxy-xlb2z" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:02.062867    5221 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:02.257084    5221 request.go:632] Waited for 194.171971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-232000
	I0917 02:26:02.257181    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-232000
	I0917 02:26:02.257193    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:02.257205    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:02.257213    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:02.259698    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:02.259711    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:02.259718    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:02 GMT
	I0917 02:26:02.259722    5221 round_trippers.go:580]     Audit-Id: e675a878-7f95-42c4-8341-65277fb467ce
	I0917 02:26:02.259726    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:02.259730    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:02.259733    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:02.259737    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:02.260062    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-232000","namespace":"kube-system","uid":"a38a42a2-e0f9-4c6e-aa99-8dae3f326090","resourceVersion":"828","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6ab4348c7fba41d6fa49c901ef5e8acf","kubernetes.io/config.mirror":"6ab4348c7fba41d6fa49c901ef5e8acf","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0917 02:26:02.457926    5221 request.go:632] Waited for 197.630012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:02.458000    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:02.458007    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:02.458025    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:02.458030    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:02.460577    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:02.460587    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:02.460593    5221 round_trippers.go:580]     Audit-Id: e88ee0d3-2228-484e-b323-73d11157d0ad
	I0917 02:26:02.460624    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:02.460630    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:02.460632    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:02.460634    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:02.460637    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:02 GMT
	I0917 02:26:02.460715    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:02.460912    5221 pod_ready.go:98] node "multinode-232000" hosting pod "kube-scheduler-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:02.460923    5221 pod_ready.go:82] duration metric: took 398.05006ms for pod "kube-scheduler-multinode-232000" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:02.460930    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000" hosting pod "kube-scheduler-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:02.460937    5221 pod_ready.go:39] duration metric: took 1.735423244s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
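	[editor's note] The pod_ready:39 summary lists the exact label selectors the "extra waiting" covers. A sketch of enumerating the same system-critical pods with client-go, under the assumption that a kubeconfig path is supplied (the path below is a placeholder, not the Jenkins one used in this run):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; substitute the kubeconfig for your cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same selectors as the pod_ready summary line above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := clientset.CoreV1().Pods("kube-system").List(
			context.Background(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s: phase=%s\n", p.Name, p.Status.Phase)
		}
	}
}
```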
	I0917 02:26:02.460951    5221 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 02:26:02.472005    5221 command_runner.go:130] > -16
	I0917 02:26:02.472030    5221 ops.go:34] apiserver oom_adj: -16
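	[editor's note] The oom_adj of -16 means the kernel's OOM killer will strongly prefer reaping other processes before kube-apiserver. The check above shells out to `cat /proc/$(pgrep kube-apiserver)/oom_adj`; a hedged Go equivalent of the same probe, assuming a single kube-apiserver process (pgrep can return multiple PIDs):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj reproduces `cat /proc/$(pgrep kube-apiserver)/oom_adj`:
// a negative value tells the kernel OOM killer to spare the process.
// Assumes exactly one matching process; a real check should split the output.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(adj)), nil
}

func main() {
	if adj, err := apiserverOOMAdj(); err == nil {
		fmt.Println("apiserver oom_adj:", adj) // expected: -16, as logged above
	}
}
```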
	I0917 02:26:02.472035    5221 kubeadm.go:597] duration metric: took 9.172389361s to restartPrimaryControlPlane
	I0917 02:26:02.472040    5221 kubeadm.go:394] duration metric: took 9.193145452s to StartCluster
	I0917 02:26:02.472051    5221 settings.go:142] acquiring lock: {Name:mk5de70c670a23cf49fdbd19ddb5c1d2a9de9a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:26:02.472140    5221 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:26:02.472472    5221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
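	[editor's note] The settings.go/lock.go lines show kubeconfig updates guarded by a named lock with Delay:500ms and Timeout:1m0s, so concurrent minikube invocations cannot clobber the file. A generic sketch of that acquire-with-retry pattern using an exclusive lockfile; the lockfile scheme is an assumption, not minikube's actual lock implementation:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// writeFileLocked retries acquiring an exclusive lockfile every delay until
// timeout, then writes the target; mirrors the Delay:500ms Timeout:1m0s
// parameters in the log above.
func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			defer os.Remove(lock) // release the lock once the write is done
			f.Close()
			break
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + lock)
		}
		time.Sleep(delay)
	}
	return os.WriteFile(path, data, 0o600)
}

func main() {
	err := writeFileLocked("/tmp/kubeconfig.demo", []byte("apiVersion: v1\n"),
		500*time.Millisecond, time.Minute)
	fmt.Println("write:", err)
}
```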
	I0917 02:26:02.472755    5221 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:26:02.472772    5221 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:26:02.472880    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:02.530468    5221 out.go:177] * Verifying Kubernetes components...
	I0917 02:26:02.572271    5221 out.go:177] * Enabled addons: 
	I0917 02:26:02.593371    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:02.613940    5221 addons.go:510] duration metric: took 141.175129ms for enable addons: enabled=[]
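	[editor's note] The addons step here is effectively a no-op: every entry in the toEnable map above is false, so "Enabled addons:" reports enabled=[] and the 141ms is pure bookkeeping. A trivial sketch of how such a map reduces to the enabled list (the helper name is illustrative):

```go
package main

import "fmt"

// enabledAddons filters a toEnable map like the one logged above down to the
// addons that are actually switched on; for this run every value is false,
// so the result is empty, matching enabled=[].
func enabledAddons(toEnable map[string]bool) []string {
	var out []string
	for name, on := range toEnable {
		if on {
			out = append(out, name)
		}
	}
	return out
}

func main() {
	fmt.Println(enabledAddons(map[string]bool{"dashboard": false, "ingress": false})) // []
}
```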
	I0917 02:26:02.733145    5221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:26:02.746052    5221 node_ready.go:35] waiting up to 6m0s for node "multinode-232000" to be "Ready" ...
	I0917 02:26:02.746107    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:02.746112    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:02.746118    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:02.746121    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:02.747828    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:02.747837    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:02.747842    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:02.747845    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:02.747847    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:02.747850    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:02.747855    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:02 GMT
	I0917 02:26:02.747858    5221 round_trippers.go:580]     Audit-Id: d5496d68-1a64-404a-8d6c-9b7cc23eab7d
	I0917 02:26:02.748129    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:03.246400    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:03.246425    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:03.246436    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:03.246443    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:03.248831    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:03.248843    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:03.248849    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:03.248853    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:03 GMT
	I0917 02:26:03.248872    5221 round_trippers.go:580]     Audit-Id: d5465bc4-df8d-4c46-ae7c-3c5669cf489d
	I0917 02:26:03.248883    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:03.248892    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:03.248899    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:03.249260    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:03.746905    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:03.746926    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:03.746937    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:03.746943    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:03.749584    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:03.749599    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:03.749606    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:03.749610    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:03.749614    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:03.749618    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:03 GMT
	I0917 02:26:03.749623    5221 round_trippers.go:580]     Audit-Id: 0de3ccd8-6107-4820-bf38-1d95edc7f688
	I0917 02:26:03.749629    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:03.749737    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:04.247213    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:04.247255    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:04.247266    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:04.247271    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:04.249600    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:04.249612    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:04.249619    5221 round_trippers.go:580]     Audit-Id: 77757cd5-1fdb-49a4-af4b-f47aecf7626b
	I0917 02:26:04.249622    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:04.249625    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:04.249629    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:04.249631    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:04.249634    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:04 GMT
	I0917 02:26:04.249777    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:04.746521    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:04.746548    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:04.746559    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:04.746565    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:04.749018    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:04.749033    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:04.749040    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:04.749047    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:04 GMT
	I0917 02:26:04.749051    5221 round_trippers.go:580]     Audit-Id: 6310c865-04f8-461f-b9f3-df4feda0be92
	I0917 02:26:04.749057    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:04.749061    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:04.749064    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:04.749233    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:04.749483    5221 node_ready.go:53] node "multinode-232000" has status "Ready":"False"
	I0917 02:26:05.247645    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:05.247665    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:05.247676    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:05.247686    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:05.250631    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:05.250645    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:05.250652    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:05.250658    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:05 GMT
	I0917 02:26:05.250665    5221 round_trippers.go:580]     Audit-Id: bf84e072-c3a4-434c-8a99-ca902a8cd4fa
	I0917 02:26:05.250671    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:05.250675    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:05.250681    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:05.251112    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:05.746715    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:05.746739    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:05.746750    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:05.746758    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:05.749253    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:05.749274    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:05.749285    5221 round_trippers.go:580]     Audit-Id: 031b803f-ecc3-4ecc-a94a-2f1be6a17281
	I0917 02:26:05.749295    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:05.749301    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:05.749306    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:05.749314    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:05.749318    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:05 GMT
	I0917 02:26:05.749510    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:06.248253    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:06.248270    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:06.248278    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:06.248282    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:06.249998    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:06.250006    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:06.250011    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:06.250014    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:06 GMT
	I0917 02:26:06.250016    5221 round_trippers.go:580]     Audit-Id: 5162bda8-72e3-4622-af16-9d414585fc88
	I0917 02:26:06.250019    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:06.250022    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:06.250024    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:06.250155    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:06.748270    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:06.748295    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:06.748335    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:06.748342    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:06.751307    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:06.751323    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:06.751331    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:06.751334    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:06.751339    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:06 GMT
	I0917 02:26:06.751344    5221 round_trippers.go:580]     Audit-Id: 1e3a75d5-822f-4917-9c8c-59e939389560
	I0917 02:26:06.751348    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:06.751351    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:06.751441    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:06.751700    5221 node_ready.go:53] node "multinode-232000" has status "Ready":"False"
	I0917 02:26:07.247009    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:07.247024    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:07.247032    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:07.247035    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:07.248816    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:07.248829    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:07.248836    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:07.248840    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:07.248843    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:07 GMT
	I0917 02:26:07.248846    5221 round_trippers.go:580]     Audit-Id: 11fd5706-ef83-43d5-bad1-04e231326bf5
	I0917 02:26:07.248849    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:07.248853    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:07.249074    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:07.746752    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:07.746777    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:07.746789    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:07.746794    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:07.749614    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:07.749631    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:07.749638    5221 round_trippers.go:580]     Audit-Id: 358e443a-a491-4fa7-a31f-e9a538bc2208
	I0917 02:26:07.749644    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:07.749648    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:07.749654    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:07.749659    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:07.749663    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:07 GMT
	I0917 02:26:07.749878    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:08.247780    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:08.247803    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:08.247812    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:08.247820    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:08.250152    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:08.250165    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:08.250171    5221 round_trippers.go:580]     Audit-Id: 6f74250f-c5b6-4af8-89db-d7aecbd6a52f
	I0917 02:26:08.250176    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:08.250179    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:08.250183    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:08.250201    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:08.250206    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:08 GMT
	I0917 02:26:08.250582    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:08.748333    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:08.748363    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:08.748375    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:08.748380    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:08.751003    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:08.751019    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:08.751026    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:08.751030    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:08 GMT
	I0917 02:26:08.751033    5221 round_trippers.go:580]     Audit-Id: 942c4a5b-058d-4147-b553-8bfe7917e2d1
	I0917 02:26:08.751037    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:08.751042    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:08.751045    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:08.751315    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:09.247719    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:09.247742    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:09.247755    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:09.247761    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:09.250302    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:09.250352    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:09.250362    5221 round_trippers.go:580]     Audit-Id: d9fb6f04-f825-4b7a-9c00-4563ab889066
	I0917 02:26:09.250366    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:09.250369    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:09.250373    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:09.250378    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:09.250384    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:09 GMT
	I0917 02:26:09.250531    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:09.250783    5221 node_ready.go:53] node "multinode-232000" has status "Ready":"False"
	I0917 02:26:09.748371    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:09.748395    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:09.748409    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:09.748418    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:09.751179    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:09.751193    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:09.751200    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:09.751203    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:09 GMT
	I0917 02:26:09.751208    5221 round_trippers.go:580]     Audit-Id: 02426122-cb5d-4703-8836-c7e9379e2552
	I0917 02:26:09.751212    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:09.751237    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:09.751243    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:09.751309    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:10.247062    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:10.247089    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:10.247102    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:10.247107    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:10.249677    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:10.249690    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:10.249695    5221 round_trippers.go:580]     Audit-Id: d774fdc1-907c-4c0f-93f2-4826af2c3275
	I0917 02:26:10.249700    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:10.249704    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:10.249707    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:10.249712    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:10.249715    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:10 GMT
	I0917 02:26:10.249793    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:10.747093    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:10.747117    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:10.747129    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:10.747134    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:10.749952    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:10.749966    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:10.749974    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:10 GMT
	I0917 02:26:10.749979    5221 round_trippers.go:580]     Audit-Id: 4ed5f44c-a575-41f3-ad51-29a6b033237a
	I0917 02:26:10.749982    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:10.749984    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:10.749987    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:10.749991    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:10.750065    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:11.247210    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:11.247229    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:11.247239    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:11.247243    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:11.251635    5221 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:26:11.251650    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:11.251655    5221 round_trippers.go:580]     Audit-Id: c47fce14-9a86-4b1c-bd1c-c537454756cf
	I0917 02:26:11.251658    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:11.251660    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:11.251662    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:11.251665    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:11.251667    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:11 GMT
	I0917 02:26:11.251735    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:11.251930    5221 node_ready.go:53] node "multinode-232000" has status "Ready":"False"
	I0917 02:26:11.746646    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:11.746671    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:11.746683    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:11.746690    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:11.749441    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:11.749453    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:11.749458    5221 round_trippers.go:580]     Audit-Id: 00ce8c37-8d96-44ed-9750-000be77865e3
	I0917 02:26:11.749461    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:11.749464    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:11.749467    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:11.749470    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:11.749472    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:11 GMT
	I0917 02:26:11.749559    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:12.247113    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:12.247126    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:12.247133    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:12.247137    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:12.250221    5221 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:26:12.250233    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:12.250253    5221 round_trippers.go:580]     Audit-Id: 1ed3c433-146a-4820-afa8-cf9eb4213a58
	I0917 02:26:12.250258    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:12.250261    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:12.250264    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:12.250268    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:12.250275    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:12 GMT
	I0917 02:26:12.250407    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:12.747129    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:12.747153    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:12.747165    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:12.747171    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:12.749967    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:12.749994    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:12.750002    5221 round_trippers.go:580]     Audit-Id: 877c75f0-fef6-492f-a4e9-0149d136f58b
	I0917 02:26:12.750008    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:12.750011    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:12.750014    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:12.750017    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:12.750020    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:12 GMT
	I0917 02:26:12.750172    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:12.750420    5221 node_ready.go:49] node "multinode-232000" has status "Ready":"True"
	I0917 02:26:12.750436    5221 node_ready.go:38] duration metric: took 10.00431778s for node "multinode-232000" to be "Ready" ...
	I0917 02:26:12.750444    5221 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
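The node_ready trace above is a simple poll: minikube GETs /api/v1/nodes/multinode-232000 on a roughly 500ms cadence (visible in the timestamps) until the node reports a Ready condition of "True", then moves on to waiting for the system-critical pods. A minimal client-go sketch of that loop follows; it is an editor's illustration of the pattern, not minikube's node_ready.go, and the kubeconfig path is a placeholder, not a value from this run.

-- sketch (editor's illustration, not test output) --
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node's Ready condition is True,
// which is the status node_ready.go is logging above.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder path; the real run resolves kubeconfig under MINIKUBE_HOME.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll on the ~500ms cadence visible in the trace above.
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	timeout := time.After(6 * time.Minute)
	for {
		select {
		case <-timeout:
			fmt.Println("timed out waiting for node to become Ready")
			return
		case <-ticker.C:
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-232000", metav1.GetOptions{})
			if err == nil && isNodeReady(node) {
				fmt.Println("node is Ready")
				return
			}
		}
	}
}
-- /sketch --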
	I0917 02:26:12.750490    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:12.750497    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:12.750504    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:12.750509    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:12.753310    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:12.753331    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:12.753344    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:12.753350    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:12.753357    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:12 GMT
	I0917 02:26:12.753360    5221 round_trippers.go:580]     Audit-Id: 9e476eb0-3edc-440e-8a01-9f661f7aa4f5
	I0917 02:26:12.753367    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:12.753371    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:12.754053    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"912"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89225 chars]
	I0917 02:26:12.755941    5221 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:12.755980    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:12.755985    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:12.755991    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:12.755995    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:12.757082    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:12.757091    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:12.757099    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:12 GMT
	I0917 02:26:12.757104    5221 round_trippers.go:580]     Audit-Id: 51e0aced-52b1-45a1-8302-f1ebe70f0df4
	I0917 02:26:12.757107    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:12.757110    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:12.757114    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:12.757117    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:12.757234    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0917 02:26:12.757489    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:12.757496    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:12.757501    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:12.757504    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:12.758440    5221 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:26:12.758449    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:12.758454    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:12 GMT
	I0917 02:26:12.758457    5221 round_trippers.go:580]     Audit-Id: 8a7645d5-3390-4ab1-9a1d-0fafe92e8c98
	I0917 02:26:12.758460    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:12.758463    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:12.758466    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:12.758468    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:12.758628    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:13.257052    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:13.257080    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:13.257092    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:13.257097    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:13.259589    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:13.259602    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:13.259609    5221 round_trippers.go:580]     Audit-Id: 9cf005fe-53ba-4797-860e-75eaa0353a49
	I0917 02:26:13.259613    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:13.259617    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:13.259621    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:13.259624    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:13.259627    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:13 GMT
	I0917 02:26:13.259790    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0917 02:26:13.260168    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:13.260178    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:13.260185    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:13.260190    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:13.261811    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:13.261819    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:13.261824    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:13.261827    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:13.261831    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:13.261834    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:13 GMT
	I0917 02:26:13.261836    5221 round_trippers.go:580]     Audit-Id: 6765ccb0-a0e2-4d7e-9d32-1b342f197114
	I0917 02:26:13.261840    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:13.261894    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:13.756286    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:13.756338    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:13.756351    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:13.756357    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:13.758956    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:13.758971    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:13.758980    5221 round_trippers.go:580]     Audit-Id: 52871bc1-08f7-4ce3-8e68-c20630b80256
	I0917 02:26:13.758984    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:13.758988    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:13.758994    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:13.758997    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:13.759000    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:13 GMT
	I0917 02:26:13.759205    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0917 02:26:13.759582    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:13.759592    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:13.759600    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:13.759604    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:13.760956    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:13.760963    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:13.760968    5221 round_trippers.go:580]     Audit-Id: 92042065-ee16-4db0-bef2-6b45ad14a8c6
	I0917 02:26:13.760971    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:13.760974    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:13.760978    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:13.760984    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:13.760987    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:13 GMT
	I0917 02:26:13.761154    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:14.256161    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:14.256204    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:14.256211    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:14.256216    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:14.257917    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:14.257932    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:14.257940    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:14.257945    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:14.257951    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:14.257955    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:14 GMT
	I0917 02:26:14.257962    5221 round_trippers.go:580]     Audit-Id: f289b32a-0f9c-4903-b0d5-985a11f8c99d
	I0917 02:26:14.257976    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:14.258044    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0917 02:26:14.258324    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:14.258332    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:14.258337    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:14.258340    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:14.259811    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:14.259821    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:14.259829    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:14.259833    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:14.259836    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:14.259839    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:14 GMT
	I0917 02:26:14.259844    5221 round_trippers.go:580]     Audit-Id: 8b5bd704-6a01-49ac-8aec-021470923829
	I0917 02:26:14.259846    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:14.260106    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:14.756244    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:14.756260    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:14.756267    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:14.756270    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:14.758565    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:14.758576    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:14.758581    5221 round_trippers.go:580]     Audit-Id: 9f168eff-c34d-4067-9d03-893685573ae1
	I0917 02:26:14.758586    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:14.758589    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:14.758592    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:14.758594    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:14.758597    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:14 GMT
	I0917 02:26:14.758648    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0917 02:26:14.758929    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:14.758936    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:14.758941    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:14.758945    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:14.760978    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:14.760986    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:14.760992    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:14 GMT
	I0917 02:26:14.760995    5221 round_trippers.go:580]     Audit-Id: 0bd0bc85-4f39-4866-9d95-ea6740130808
	I0917 02:26:14.760999    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:14.761004    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:14.761008    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:14.761012    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:14.761299    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:14.761478    5221 pod_ready.go:103] pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace has status "Ready":"False"
	I0917 02:26:15.256937    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:15.256978    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.257003    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.257007    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.258811    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.258821    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.258831    5221 round_trippers.go:580]     Audit-Id: 30550394-f485-48b0-8ebf-2f4638f3cea2
	I0917 02:26:15.258836    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.258841    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.258844    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.258847    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.258849    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.259083    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"926","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7039 chars]
	I0917 02:26:15.259362    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:15.259369    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.259375    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.259380    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.260500    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.260510    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.260515    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.260518    5221 round_trippers.go:580]     Audit-Id: 65b980fb-bf64-4c00-8aef-e0807c6a7f9a
	I0917 02:26:15.260523    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.260530    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.260534    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.260537    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.260857    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:15.261030    5221 pod_ready.go:93] pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:15.261041    5221 pod_ready.go:82] duration metric: took 2.505078533s for pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace to be "Ready" ...
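Each pod_ready wait above applies the same pattern to one kube-system pod at a time: GET the pod, read its Ready condition, re-check the node between attempts, and stop once the condition reports "True". A sketch of that per-pod check is below, reusing the imports and client from the node sketch earlier; the function name and its placement are assumptions for illustration, not minikube's pod_ready.go.

-- sketch (editor's illustration, not test output) --
// isPodReady reports whether the pod's Ready condition is True,
// the status pod_ready.go reports as "Ready":"True" above.
// (Uses the same imports and client as the node sketch earlier.)
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls a single kube-system pod until it is Ready
// or the timeout elapses, e.g.:
//   waitForPodReady(client, "coredns-7c65d6cfc9-hr8rd", 6*time.Minute)
func waitForPodReady(client *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %q was not Ready within %v", name, timeout)
}
-- /sketch --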
	I0917 02:26:15.261047    5221 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.261077    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-232000
	I0917 02:26:15.261081    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.261087    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.261091    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.262137    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.262145    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.262150    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.262154    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.262157    5221 round_trippers.go:580]     Audit-Id: 0ae9c7df-0193-4496-a3d2-6560286b49de
	I0917 02:26:15.262160    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.262165    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.262170    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.262380    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-232000","namespace":"kube-system","uid":"023b8525-6267-41df-ab63-f9c82adf3da1","resourceVersion":"895","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.mirror":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953777Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6663 chars]
	I0917 02:26:15.262637    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:15.262643    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.262648    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.262650    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.263744    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.263751    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.263756    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.263774    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.263801    5221 round_trippers.go:580]     Audit-Id: 3927b9dc-726b-4707-ab34-f0009b4d0af8
	I0917 02:26:15.263818    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.263824    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.263827    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.263933    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:15.264109    5221 pod_ready.go:93] pod "etcd-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:15.264117    5221 pod_ready.go:82] duration metric: took 3.06514ms for pod "etcd-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.264127    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.264161    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-232000
	I0917 02:26:15.264169    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.264175    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.264179    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.265288    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.265294    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.265298    5221 round_trippers.go:580]     Audit-Id: ec52314f-26b5-4c04-bdfe-3b0687b140f0
	I0917 02:26:15.265302    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.265304    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.265306    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.265315    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.265319    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.265747    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-232000","namespace":"kube-system","uid":"4bc0fa4f-4ca7-478d-8b7c-b59c24a56faa","resourceVersion":"899","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"67a66a13a29b1cc7d88ed81650cbab1c","kubernetes.io/config.mirror":"67a66a13a29b1cc7d88ed81650cbab1c","kubernetes.io/config.seen":"2024-09-17T09:21:50.527954370Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0917 02:26:15.265971    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:15.265978    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.265983    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.265987    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.267067    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.267075    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.267080    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.267084    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.267088    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.267092    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.267097    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.267100    5221 round_trippers.go:580]     Audit-Id: 4d66ba77-7146-4f94-aa26-743d95b6c06e
	I0917 02:26:15.267327    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:15.267493    5221 pod_ready.go:93] pod "kube-apiserver-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:15.267501    5221 pod_ready.go:82] duration metric: took 3.369149ms for pod "kube-apiserver-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.267507    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.267534    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-232000
	I0917 02:26:15.267538    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.267544    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.267549    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.268681    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.268695    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.268700    5221 round_trippers.go:580]     Audit-Id: fea38617-f182-401b-84f4-164f6524b857
	I0917 02:26:15.268703    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.268706    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.268709    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.268712    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.268715    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.268992    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-232000","namespace":"kube-system","uid":"788e2a30-fcea-4f4c-afc3-52d73d046e1d","resourceVersion":"914","creationTimestamp":"2024-09-17T09:21:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70ae9bf162cfffffa860fd43666d4b44","kubernetes.io/config.mirror":"70ae9bf162cfffffa860fd43666d4b44","kubernetes.io/config.seen":"2024-09-17T09:21:55.992286729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0917 02:26:15.269224    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:15.269232    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.269238    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.269243    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.270433    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.270441    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.270446    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.270449    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.270452    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.270455    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.270458    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.270466    5221 round_trippers.go:580]     Audit-Id: 59387e12-6058-46f1-855d-444750a41c7a
	I0917 02:26:15.271323    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:15.271536    5221 pod_ready.go:93] pod "kube-controller-manager-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:15.271544    5221 pod_ready.go:82] duration metric: took 4.032939ms for pod "kube-controller-manager-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.271557    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8fb4t" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.271591    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fb4t
	I0917 02:26:15.271596    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.271602    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.271605    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.272726    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.272734    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.272741    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.272745    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.272749    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.272760    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.272763    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.272765    5221 round_trippers.go:580]     Audit-Id: fc25355a-be45-4ca4-951c-7d819f14f6a4
	I0917 02:26:15.273037    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8fb4t","generateName":"kube-proxy-","namespace":"kube-system","uid":"e73b5d46-804f-4a13-a286-f0194436c3fc","resourceVersion":"516","creationTimestamp":"2024-09-17T09:22:43Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0917 02:26:15.273266    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:15.273273    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.273279    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.273282    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.274345    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.274352    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.274356    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.274360    5221 round_trippers.go:580]     Audit-Id: 2eacf506-e644-4690-a2bd-26023f8af311
	I0917 02:26:15.274364    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.274366    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.274369    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.274371    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.274601    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"b0d6988f-c01e-465b-b2df-6e79ea652296","resourceVersion":"581","creationTimestamp":"2024-09-17T09:22:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_22_44_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3825 chars]
	I0917 02:26:15.274749    5221 pod_ready.go:93] pod "kube-proxy-8fb4t" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:15.274756    5221 pod_ready.go:82] duration metric: took 3.194435ms for pod "kube-proxy-8fb4t" in "kube-system" namespace to be "Ready" ...
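
The pod_ready lines above poll each control-plane pod's status conditions until Ready=True, with a 6m0s cap per pod. A minimal client-go sketch of the same predicate (assumes a reachable kubeconfig at the default location; this is not minikube's internal helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition Ready=True, the same
// predicate the pod_ready log lines are waiting on.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podReady(context.Background(), cs, "kube-system", "kube-proxy-8fb4t")
	fmt.Println(ok, err)
}
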
	I0917 02:26:15.274762    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9s8zh" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.457062    5221 request.go:632] Waited for 182.2569ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9s8zh
	I0917 02:26:15.457120    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9s8zh
	I0917 02:26:15.457129    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.457135    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.457139    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.459245    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:15.459258    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.459263    5221 round_trippers.go:580]     Audit-Id: ec6babfe-e5dc-4651-aa2b-8f5967f72bb9
	I0917 02:26:15.459266    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.459269    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.459272    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.459303    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.459308    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.459630    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9s8zh","generateName":"kube-proxy-","namespace":"kube-system","uid":"8516d216-3857-4702-9656-97c8c91337fc","resourceVersion":"890","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6394 chars]
	I0917 02:26:15.659038    5221 request.go:632] Waited for 199.111398ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:15.659171    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:15.659181    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.659192    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.659201    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.662024    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:15.662041    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.662049    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.662053    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.662058    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.662062    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.662066    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.662070    5221 round_trippers.go:580]     Audit-Id: 704458b4-8175-4324-abed-2c9fda237785
	I0917 02:26:15.662140    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:15.662396    5221 pod_ready.go:93] pod "kube-proxy-9s8zh" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:15.662410    5221 pod_ready.go:82] duration metric: took 387.641161ms for pod "kube-proxy-9s8zh" in "kube-system" namespace to be "Ready" ...
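
The request.go:632 "Waited ... due to client-side throttling, not priority and fairness" entries are client-go's token-bucket rate limiter pacing requests on the client, not server-side API priority and fairness. client-go defaults to QPS 5 with burst 10, which matches the ~180-200ms gaps logged here. A sketch of raising those limits on a rest.Config (values are illustrative only):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Illustrative values: the defaults (QPS=5, Burst=10) are what produce
	// the ~200ms client-side waits visible in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(cs != nil)
}
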
	I0917 02:26:15.662418    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xlb2z" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.858007    5221 request.go:632] Waited for 195.547162ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xlb2z
	I0917 02:26:15.858063    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xlb2z
	I0917 02:26:15.858069    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.858078    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.858082    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.859988    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.859998    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.860006    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.860013    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.860022    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:16 GMT
	I0917 02:26:15.860027    5221 round_trippers.go:580]     Audit-Id: 782f28e5-b7a4-41cc-933f-3db4b4f7cb50
	I0917 02:26:15.860031    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.860034    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.860278    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xlb2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"66e8dada-5a23-453e-ba6e-a9146d3467e7","resourceVersion":"742","creationTimestamp":"2024-09-17T09:23:37Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:23:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0917 02:26:16.057101    5221 request.go:632] Waited for 196.48733ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m03
	I0917 02:26:16.057165    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m03
	I0917 02:26:16.057172    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:16.057178    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:16.057182    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:16.058828    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:16.058839    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:16.058845    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:16.058847    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:16.058850    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:16.058853    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:16.058856    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:16 GMT
	I0917 02:26:16.058859    5221 round_trippers.go:580]     Audit-Id: d6531923-6836-4eca-a29c-5f6fab3b1917
	I0917 02:26:16.059029    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m03","uid":"ca6d8a0b-78e8-401d-8fd0-21af7b79983d","resourceVersion":"768","creationTimestamp":"2024-09-17T09:24:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_24_31_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:24:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3642 chars]
	I0917 02:26:16.059200    5221 pod_ready.go:93] pod "kube-proxy-xlb2z" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:16.059209    5221 pod_ready.go:82] duration metric: took 396.78299ms for pod "kube-proxy-xlb2z" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:16.059216    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:16.257918    5221 request.go:632] Waited for 198.659871ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-232000
	I0917 02:26:16.257982    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-232000
	I0917 02:26:16.257992    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:16.258001    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:16.258008    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:16.260492    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:16.260506    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:16.260514    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:16.260519    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:16.260522    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:16.260525    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:16 GMT
	I0917 02:26:16.260529    5221 round_trippers.go:580]     Audit-Id: 1cae5bad-3ef4-4f3e-a912-fe3e3e367819
	I0917 02:26:16.260532    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:16.260613    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-232000","namespace":"kube-system","uid":"a38a42a2-e0f9-4c6e-aa99-8dae3f326090","resourceVersion":"910","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6ab4348c7fba41d6fa49c901ef5e8acf","kubernetes.io/config.mirror":"6ab4348c7fba41d6fa49c901ef5e8acf","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0917 02:26:16.458572    5221 request.go:632] Waited for 197.660697ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:16.458695    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:16.458708    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:16.458719    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:16.458728    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:16.461469    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:16.461484    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:16.461491    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:16.461497    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:16 GMT
	I0917 02:26:16.461521    5221 round_trippers.go:580]     Audit-Id: b6077a0b-d15d-401b-bcd3-5590a868f232
	I0917 02:26:16.461538    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:16.461545    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:16.461551    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:16.461661    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:16.461946    5221 pod_ready.go:93] pod "kube-scheduler-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:16.461957    5221 pod_ready.go:82] duration metric: took 402.733744ms for pod "kube-scheduler-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:16.461966    5221 pod_ready.go:39] duration metric: took 3.711495893s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:26:16.461980    5221 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:26:16.462054    5221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:26:16.475377    5221 command_runner.go:130] > 1651
	I0917 02:26:16.475657    5221 api_server.go:72] duration metric: took 14.002823163s to wait for apiserver process to appear ...
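
Before probing health over HTTP, minikube confirms the apiserver process exists by running `sudo pgrep -xnf kube-apiserver.*minikube.*` on the guest; the `1651` echoed above is the matched PID. The same check, as a local sketch (run on the VM rather than the host; the log runs it with sudo over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the ssh_runner step: pgrep -x (pattern must match the whole
	// line) -n (newest match only) -f (match the full command line).
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no matching process:", err)
		return
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. "1651" in the log
}
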
	I0917 02:26:16.475666    5221 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:26:16.475676    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:26:16.479861    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0917 02:26:16.479903    5221 round_trippers.go:463] GET https://192.169.0.14:8443/version
	I0917 02:26:16.479909    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:16.479914    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:16.479919    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:16.480431    5221 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:26:16.480440    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:16.480446    5221 round_trippers.go:580]     Content-Length: 263
	I0917 02:26:16.480450    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:16 GMT
	I0917 02:26:16.480455    5221 round_trippers.go:580]     Audit-Id: 610915a8-772d-405f-9fa4-0d73b790f14d
	I0917 02:26:16.480458    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:16.480461    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:16.480464    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:16.480467    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:16.480482    5221 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
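
With the process up, the health check is a plain GET to /healthz (expecting the body `ok`) followed by /version, whose JSON body above carries the control-plane version v1.31.1. A diagnostic-only sketch of the same two probes; it skips TLS verification purely for brevity, whereas minikube authenticates with the cluster CA and client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	c := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	// Endpoint taken from the log; adjust for your cluster.
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := c.Get("https://192.169.0.14:8443" + path)
		if err != nil {
			fmt.Println(path, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println(path, resp.StatusCode, string(body))
	}
}
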
	I0917 02:26:16.480503    5221 api_server.go:141] control plane version: v1.31.1
	I0917 02:26:16.480511    5221 api_server.go:131] duration metric: took 4.840817ms to wait for apiserver health ...
	I0917 02:26:16.480515    5221 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 02:26:16.657815    5221 request.go:632] Waited for 177.21952ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:16.657870    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:16.657880    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:16.657893    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:16.657905    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:16.661856    5221 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:26:16.661877    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:16.661885    5221 round_trippers.go:580]     Audit-Id: d4a82d5e-3b36-420f-839b-4141c8b30993
	I0917 02:26:16.661890    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:16.661895    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:16.661898    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:16.661902    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:16.661905    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:16 GMT
	I0917 02:26:16.663033    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"933"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"926","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88910 chars]
	I0917 02:26:16.665172    5221 system_pods.go:59] 12 kube-system pods found
	I0917 02:26:16.665196    5221 system_pods.go:61] "coredns-7c65d6cfc9-hr8rd" [c990c87f-921e-45ba-845b-499147aaa1f9] Running
	I0917 02:26:16.665219    5221 system_pods.go:61] "etcd-multinode-232000" [023b8525-6267-41df-ab63-f9c82adf3da1] Running
	I0917 02:26:16.665223    5221 system_pods.go:61] "kindnet-7djsb" [4b28da1f-ce8e-43a9-bda0-e44de7b6d582] Running
	I0917 02:26:16.665226    5221 system_pods.go:61] "kindnet-bz9gj" [42665fdd-c209-43ac-8852-3fd0517abce4] Running
	I0917 02:26:16.665229    5221 system_pods.go:61] "kindnet-fgvhm" [f8fe7dd6-85d9-447e-88f1-d98d354a0802] Running
	I0917 02:26:16.665232    5221 system_pods.go:61] "kube-apiserver-multinode-232000" [4bc0fa4f-4ca7-478d-8b7c-b59c24a56faa] Running
	I0917 02:26:16.665254    5221 system_pods.go:61] "kube-controller-manager-multinode-232000" [788e2a30-fcea-4f4c-afc3-52d73d046e1d] Running
	I0917 02:26:16.665257    5221 system_pods.go:61] "kube-proxy-8fb4t" [e73b5d46-804f-4a13-a286-f0194436c3fc] Running
	I0917 02:26:16.665259    5221 system_pods.go:61] "kube-proxy-9s8zh" [8516d216-3857-4702-9656-97c8c91337fc] Running
	I0917 02:26:16.665261    5221 system_pods.go:61] "kube-proxy-xlb2z" [66e8dada-5a23-453e-ba6e-a9146d3467e7] Running
	I0917 02:26:16.665278    5221 system_pods.go:61] "kube-scheduler-multinode-232000" [a38a42a2-e0f9-4c6e-aa99-8dae3f326090] Running
	I0917 02:26:16.665281    5221 system_pods.go:61] "storage-provisioner" [878f83a8-de4f-48b8-98ac-2d34171091ae] Running
	I0917 02:26:16.665284    5221 system_pods.go:74] duration metric: took 184.765005ms to wait for pod list to return data ...
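
The system_pods step is a single PodList request against kube-system, counting items and checking each pod's phase; the twelve "Running" lines above are that list printed back. Roughly, under the same kubeconfig assumption as the earlier sketches:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
}
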
	I0917 02:26:16.665290    5221 default_sa.go:34] waiting for default service account to be created ...
	I0917 02:26:16.857156    5221 request.go:632] Waited for 191.815982ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:26:16.857190    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:26:16.857194    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:16.857202    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:16.857228    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:16.859564    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:16.859574    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:16.859579    5221 round_trippers.go:580]     Audit-Id: 4c357a3b-ca0b-419a-a053-564ae9323865
	I0917 02:26:16.859595    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:16.859599    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:16.859601    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:16.859604    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:16.859606    5221 round_trippers.go:580]     Content-Length: 261
	I0917 02:26:16.859609    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:17 GMT
	I0917 02:26:16.859620    5221 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"933"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"76b391ab-85a5-440a-b857-8ab86887edea","resourceVersion":"366","creationTimestamp":"2024-09-17T09:22:01Z"}}]}
	I0917 02:26:16.859742    5221 default_sa.go:45] found service account: "default"
	I0917 02:26:16.859751    5221 default_sa.go:55] duration metric: took 194.455804ms for default service account to be created ...
	I0917 02:26:16.859756    5221 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 02:26:17.058197    5221 request.go:632] Waited for 198.397573ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:17.058307    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:17.058318    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:17.058330    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:17.058337    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:17.062328    5221 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:26:17.062340    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:17.062346    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:17.062349    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:17.062352    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:17.062355    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:17.062357    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:17 GMT
	I0917 02:26:17.062359    5221 round_trippers.go:580]     Audit-Id: 230385cc-abfc-4227-afab-112d2468a42d
	I0917 02:26:17.063236    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"937"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"926","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88910 chars]
	I0917 02:26:17.065166    5221 system_pods.go:86] 12 kube-system pods found
	I0917 02:26:17.065177    5221 system_pods.go:89] "coredns-7c65d6cfc9-hr8rd" [c990c87f-921e-45ba-845b-499147aaa1f9] Running
	I0917 02:26:17.065181    5221 system_pods.go:89] "etcd-multinode-232000" [023b8525-6267-41df-ab63-f9c82adf3da1] Running
	I0917 02:26:17.065184    5221 system_pods.go:89] "kindnet-7djsb" [4b28da1f-ce8e-43a9-bda0-e44de7b6d582] Running
	I0917 02:26:17.065192    5221 system_pods.go:89] "kindnet-bz9gj" [42665fdd-c209-43ac-8852-3fd0517abce4] Running
	I0917 02:26:17.065196    5221 system_pods.go:89] "kindnet-fgvhm" [f8fe7dd6-85d9-447e-88f1-d98d354a0802] Running
	I0917 02:26:17.065199    5221 system_pods.go:89] "kube-apiserver-multinode-232000" [4bc0fa4f-4ca7-478d-8b7c-b59c24a56faa] Running
	I0917 02:26:17.065202    5221 system_pods.go:89] "kube-controller-manager-multinode-232000" [788e2a30-fcea-4f4c-afc3-52d73d046e1d] Running
	I0917 02:26:17.065205    5221 system_pods.go:89] "kube-proxy-8fb4t" [e73b5d46-804f-4a13-a286-f0194436c3fc] Running
	I0917 02:26:17.065208    5221 system_pods.go:89] "kube-proxy-9s8zh" [8516d216-3857-4702-9656-97c8c91337fc] Running
	I0917 02:26:17.065211    5221 system_pods.go:89] "kube-proxy-xlb2z" [66e8dada-5a23-453e-ba6e-a9146d3467e7] Running
	I0917 02:26:17.065214    5221 system_pods.go:89] "kube-scheduler-multinode-232000" [a38a42a2-e0f9-4c6e-aa99-8dae3f326090] Running
	I0917 02:26:17.065217    5221 system_pods.go:89] "storage-provisioner" [878f83a8-de4f-48b8-98ac-2d34171091ae] Running
	I0917 02:26:17.065222    5221 system_pods.go:126] duration metric: took 205.460498ms to wait for k8s-apps to be running ...
	I0917 02:26:17.065231    5221 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:26:17.065289    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:26:17.077400    5221 system_svc.go:56] duration metric: took 12.165489ms WaitForService to wait for kubelet
	I0917 02:26:17.077424    5221 kubeadm.go:582] duration metric: took 14.604578473s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:26:17.077436    5221 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:26:17.257937    5221 request.go:632] Waited for 180.45319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes
	I0917 02:26:17.258025    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0917 02:26:17.258042    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:17.258053    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:17.258060    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:17.260832    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:17.260845    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:17.260850    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:17 GMT
	I0917 02:26:17.260854    5221 round_trippers.go:580]     Audit-Id: 2739f084-3335-4335-8ce7-1a24cb542294
	I0917 02:26:17.260857    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:17.260859    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:17.260868    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:17.260876    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:17.260985    5221 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"937"},"items":[{"metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14676 chars]
	I0917 02:26:17.261391    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:17.261400    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:17.261407    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:17.261410    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:17.261413    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:17.261415    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:17.261418    5221 node_conditions.go:105] duration metric: took 183.977806ms to run NodePressure ...
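
The NodePressure step lists all nodes and reads capacity out of each node's status; the log prints ephemeral-storage and cpu for the three cluster nodes. A sketch that also surfaces the pressure conditions the step is named for (an assumption that the real check inspects similar fields):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity["cpu"]
		eph := n.Status.Capacity["ephemeral-storage"]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should be False on a
			// healthy node; Ready should be True.
			fmt.Printf("  %s=%s\n", c.Type, c.Status)
		}
	}
}
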
	I0917 02:26:17.261425    5221 start.go:241] waiting for startup goroutines ...
	I0917 02:26:17.261430    5221 start.go:246] waiting for cluster config update ...
	I0917 02:26:17.261436    5221 start.go:255] writing updated cluster config ...
	I0917 02:26:17.283156    5221 out.go:201] 
	I0917 02:26:17.305215    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:17.305357    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:26:17.327895    5221 out.go:177] * Starting "multinode-232000-m02" worker node in "multinode-232000" cluster
	I0917 02:26:17.370004    5221 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:26:17.370070    5221 cache.go:56] Caching tarball of preloaded images
	I0917 02:26:17.370252    5221 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:26:17.370270    5221 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:26:17.370405    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:26:17.371558    5221 start.go:360] acquireMachinesLock for multinode-232000-m02: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:26:17.371654    5221 start.go:364] duration metric: took 74.026µs to acquireMachinesLock for "multinode-232000-m02"
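
acquireMachinesLock serializes host and VM mutations across concurrent minikube invocations; the spec logged above retries every 500ms with a 13m timeout, and here the lock was uncontended, hence the 74µs. A crude stand-in showing the same retry/timeout shape (minikube itself uses a proper cross-process mutex, not a bare lock file):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire tries to create lockPath exclusively, retrying every delay until
// timeout elapses. Returns a release func on success.
func acquire(lockPath string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lockPath) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for " + lockPath)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Delay/timeout taken from the lock spec in the log above.
	release, err := acquire("/tmp/machines-demo.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held")
}
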
	I0917 02:26:17.371699    5221 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:26:17.371706    5221 fix.go:54] fixHost starting: m02
	I0917 02:26:17.372148    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:17.372173    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:17.381644    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53453
	I0917 02:26:17.381974    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:17.382347    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:17.382362    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:17.382644    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:17.382777    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:17.382873    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetState
	I0917 02:26:17.382948    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:17.383030    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | hyperkit pid from json: 4823
	I0917 02:26:17.383936    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | hyperkit pid 4823 missing from process table
	I0917 02:26:17.383964    5221 fix.go:112] recreateIfNeeded on multinode-232000-m02: state=Stopped err=<nil>
	I0917 02:26:17.383972    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	W0917 02:26:17.384057    5221 fix.go:138] unexpected machine state, will restart: <nil>
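
The driver's state check reads the hyperkit pid recorded in the machine's JSON and looks for it in the process table; "hyperkit pid 4823 missing from process table" is how a VM that died or was shut down uncleanly shows up, which yields state=Stopped and triggers the restart below. A sketch of the liveness probe; on Unix, signal 0 tests for existence without delivering anything (an EPERM would also mean the process exists, ignored here for brevity):

package main

import (
	"fmt"
	"os"
	"syscall"
)

// pidAlive reports whether a process with the given pid currently exists.
func pidAlive(pid int) bool {
	p, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return false
	}
	return p.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(pidAlive(4823))        // the stale pid from the log
	fmt.Println(pidAlive(os.Getpid())) // true: our own process
}
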
	I0917 02:26:17.404872    5221 out.go:177] * Restarting existing hyperkit VM for "multinode-232000-m02" ...
	I0917 02:26:17.446913    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .Start
	I0917 02:26:17.447212    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:17.447238    5221 main.go:141] libmachine: (multinode-232000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/hyperkit.pid
	I0917 02:26:17.448962    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | hyperkit pid 4823 missing from process table
	I0917 02:26:17.448978    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | pid 4823 is in state "Stopped"
	I0917 02:26:17.448998    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/hyperkit.pid...
	I0917 02:26:17.449328    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Using UUID b4bb9835-5d54-4974-9049-06fa7b3612bb
	I0917 02:26:17.474940    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Generated MAC 66:f1:ae:9f:da:63
	I0917 02:26:17.474963    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000
	I0917 02:26:17.475112    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b4bb9835-5d54-4974-9049-06fa7b3612bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0917 02:26:17.475146    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b4bb9835-5d54-4974-9049-06fa7b3612bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0917 02:26:17.475192    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b4bb9835-5d54-4974-9049-06fa7b3612bb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/multinode-232000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/bzimage,/Users/j
enkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000"}
	I0917 02:26:17.475233    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b4bb9835-5d54-4974-9049-06fa7b3612bb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/multinode-232000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/mult
inode-232000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000"
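
Every flag in the logged CmdLine is visible above: -F pid file, -c vCPUs, -m memory, -U UUID (which keeps the generated MAC, and therefore the DHCP lease, stable across restarts), PCI slots via -s (hostbridge, lpc, virtio-net, the raw disk as virtio-blk, the boot2docker ISO as ahci-cd, virtio-rnd), a com1 serial console via -l, and direct kernel boot via -f kexec. Rebuilt as a Go argv for readability; stateDir and the kernel cmdline tail are placeholders, everything else is taken from the log:

package main

import (
	"fmt"
	"strings"
)

func main() {
	stateDir := "/path/to/machines/multinode-232000-m02" // placeholder
	kernelArgs := "earlyprintk=serial loglevel=3 console=ttyS0 ..." // elided
	args := []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid",
		"-c", "2", "-m", "2200M",
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", "b4bb9835-5d54-4974-9049-06fa7b3612bb",
		"-s", "2:0,virtio-blk," + stateDir + "/multinode-232000-m02.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd," + kernelArgs,
	}
	fmt.Println("/usr/local/bin/hyperkit " + strings.Join(args, " "))
}
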
	I0917 02:26:17.475261    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:26:17.476624    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 DEBUG: hyperkit: Pid is 5269
	I0917 02:26:17.477107    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Attempt 0
	I0917 02:26:17.477117    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:17.477198    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | hyperkit pid from json: 5269
	I0917 02:26:17.479226    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Searching for 66:f1:ae:9f:da:63 in /var/db/dhcpd_leases ...
	I0917 02:26:17.479282    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0917 02:26:17.479309    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9c80}
	I0917 02:26:17.479325    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66e94ae5}
	I0917 02:26:17.479340    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9bd8}
	I0917 02:26:17.479352    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Found match: 66:f1:ae:9f:da:63
	I0917 02:26:17.479374    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | IP: 192.169.0.15
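
With no static addressing, the driver maps the VM's generated MAC back to an IP by scanning macOS's /var/db/dhcpd_leases; above, 66:f1:ae:9f:da:63 matches the lease for 192.169.0.15. A sketch of that scan, under the format assumption that each lease record lists ip_address= before hw_address=, consistent with the entries printed in the log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP returns the ip_address of the lease record whose hw_address
// contains mac. hw_address lines look like "hw_address=1,66:f1:ae:9f:da:63",
// so a substring match tolerates the leading type prefix.
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
			return ip, nil
		}
	}
	return "", fmt.Errorf("%s not found in %s", mac, path)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "66:f1:ae:9f:da:63")
	fmt.Println(ip, err)
}
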
	I0917 02:26:17.479428    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetConfigRaw
	I0917 02:26:17.480194    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetIP
	I0917 02:26:17.480405    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:26:17.480807    5221 machine.go:93] provisionDockerMachine start ...
	I0917 02:26:17.480817    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:17.480927    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:17.481023    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:17.481124    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:17.481257    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:17.481354    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:17.481479    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:17.481637    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:17.481644    5221 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:26:17.484653    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:26:17.492711    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:26:17.493633    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:26:17.493651    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:26:17.493672    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:26:17.493686    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:26:17.879934    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:26:17.879949    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:26:17.994816    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:26:17.994833    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:26:17.994842    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:26:17.994851    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:26:17.995685    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:26:17.995694    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:26:23.608811    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:26:23.608857    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:26:23.608876    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:26:23.633865    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:26:28.556191    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:26:28.556216    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetMachineName
	I0917 02:26:28.556393    5221 buildroot.go:166] provisioning hostname "multinode-232000-m02"
	I0917 02:26:28.556402    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetMachineName
	I0917 02:26:28.556506    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:28.556598    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:28.556703    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.556798    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.556921    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:28.557063    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:28.557211    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:28.557219    5221 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-232000-m02 && echo "multinode-232000-m02" | sudo tee /etc/hostname
	I0917 02:26:28.632252    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-232000-m02
	
	I0917 02:26:28.632265    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:28.632409    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:28.632512    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.632609    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.632718    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:28.632874    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:28.633027    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:28.633039    5221 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-232000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-232000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-232000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:26:28.710080    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:26:28.710103    5221 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:26:28.710114    5221 buildroot.go:174] setting up certificates
	I0917 02:26:28.710131    5221 provision.go:84] configureAuth start
	I0917 02:26:28.710141    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetMachineName
	I0917 02:26:28.710271    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetIP
	I0917 02:26:28.710388    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:28.710467    5221 provision.go:143] copyHostCerts
	I0917 02:26:28.710497    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:26:28.710544    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:26:28.710549    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:26:28.710779    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:26:28.710990    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:26:28.711021    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:26:28.711026    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:26:28.711124    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:26:28.711271    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:26:28.711299    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:26:28.711304    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:26:28.711401    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:26:28.711570    5221 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.multinode-232000-m02 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-232000-m02]
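Here provision.go issues a Docker server certificate signed by the machine CA, carrying the SAN list printed above. The openssl sketch below reproduces a certificate of the same shape; the file names, validity period, and config file are illustrative, not minikube's actual values.

    # Sketch: issue a CA-signed server cert with the SANs from the log.
    # ca.pem/ca-key.pem stand in for the machine CA; 365 days is an assumed
    # validity, not minikube's.
    cat > san.cnf <<'EOF'
    [ext]
    subjectAltName = IP:127.0.0.1,IP:192.169.0.15,DNS:localhost,DNS:minikube,DNS:multinode-232000-m02
    EOF
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.multinode-232000-m02" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -extfile san.cnf -extensions ext -days 365 -out server.pem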
	I0917 02:26:28.767847    5221 provision.go:177] copyRemoteCerts
	I0917 02:26:28.767904    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:26:28.767932    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:28.768077    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:28.768180    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.768277    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:28.768362    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/id_rsa Username:docker}
	I0917 02:26:28.807655    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:26:28.807726    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:26:28.827158    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:26:28.827239    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0917 02:26:28.846504    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:26:28.846586    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:26:28.866159    5221 provision.go:87] duration metric: took 156.017573ms to configureAuth
	I0917 02:26:28.866173    5221 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:26:28.866339    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:28.866353    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:28.866487    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:28.866573    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:28.866675    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.866761    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.866842    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:28.866960    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:28.867086    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:28.867094    5221 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:26:28.929765    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:26:28.929778    5221 buildroot.go:70] root file system type: tmpfs
	I0917 02:26:28.929847    5221 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:26:28.929859    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:28.929993    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:28.930076    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.930156    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.930226    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:28.930358    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:28.930490    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:28.930533    5221 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:26:29.004352    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:26:29.004372    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:29.004507    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:29.004604    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:29.004705    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:29.004794    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:29.004947    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:29.005087    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:29.005099    5221 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:26:30.578097    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:26:30.578112    5221 machine.go:96] duration metric: took 13.097237764s to provisionDockerMachine
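The `diff ... || { mv ...; }` step just above is an install-if-changed guard: `diff` exits non-zero both when the rendered unit differs from the installed one and when no unit is installed yet (the "can't stat" message seen here), and only then is the new file moved into place and docker restarted. The pattern in isolation:

    # Compare-then-swap for a rendered unit file: restart only when the
    # rendered copy differs from (or there is no) installed unit.
    new=/lib/systemd/system/docker.service.new
    cur=/lib/systemd/system/docker.service
    sudo diff -u "$cur" "$new" || {
      sudo mv "$new" "$cur"
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    }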
	I0917 02:26:30.578120    5221 start.go:293] postStartSetup for "multinode-232000-m02" (driver="hyperkit")
	I0917 02:26:30.578129    5221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:26:30.578139    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:30.578333    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:26:30.578346    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:30.578450    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:30.578541    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:30.578631    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:30.578734    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/id_rsa Username:docker}
	I0917 02:26:30.621027    5221 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:26:30.624319    5221 command_runner.go:130] > NAME=Buildroot
	I0917 02:26:30.624327    5221 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0917 02:26:30.624331    5221 command_runner.go:130] > ID=buildroot
	I0917 02:26:30.624334    5221 command_runner.go:130] > VERSION_ID=2023.02.9
	I0917 02:26:30.624338    5221 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0917 02:26:30.624577    5221 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:26:30.624584    5221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:26:30.624664    5221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:26:30.624803    5221 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:26:30.624809    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:26:30.624967    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:26:30.632397    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:26:30.658340    5221 start.go:296] duration metric: took 80.207794ms for postStartSetup
	I0917 02:26:30.658367    5221 fix.go:56] duration metric: took 13.286595326s for fixHost
	I0917 02:26:30.658381    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:30.658518    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:30.658619    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:30.658712    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:30.658802    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:30.658947    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:30.659081    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:30.659088    5221 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:26:30.724970    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726565190.868042565
	
	I0917 02:26:30.724981    5221 fix.go:216] guest clock: 1726565190.868042565
	I0917 02:26:30.724987    5221 fix.go:229] Guest: 2024-09-17 02:26:30.868042565 -0700 PDT Remote: 2024-09-17 02:26:30.658372 -0700 PDT m=+79.726730067 (delta=209.670565ms)
	I0917 02:26:30.724998    5221 fix.go:200] guest clock delta is within tolerance: 209.670565ms
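The clock check above asks the guest for a high-resolution timestamp over SSH and compares it against the host clock, accepting the ~210ms delta seen here. A rough equivalent follows; Linux `date +%s.%N` runs on the guest, and Python is used host-side only because macOS `date` lacks `%N`.

    # Sketch: estimate guest/host clock skew. User and IP are the ones from
    # this log; any SSH-reachable guest works.
    guest=$(ssh docker@192.169.0.15 'date +%s.%N')
    host=$(python3 -c 'import time; print(f"{time.time():.9f}")')
    python3 -c "print(f'delta: {abs($host - $guest):.6f}s')"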
	I0917 02:26:30.725002    5221 start.go:83] releasing machines lock for "multinode-232000-m02", held for 13.353263705s
	I0917 02:26:30.725019    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:30.725145    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetIP
	I0917 02:26:30.750563    5221 out.go:177] * Found network options:
	I0917 02:26:30.771576    5221 out.go:177]   - NO_PROXY=192.169.0.14
	W0917 02:26:30.792469    5221 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:26:30.792511    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:30.793320    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:30.793625    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:30.793774    5221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:26:30.793812    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	W0917 02:26:30.793892    5221 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:26:30.793999    5221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:26:30.794002    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:30.794019    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:30.794233    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:30.794235    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:30.794447    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:30.794508    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:30.794622    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/id_rsa Username:docker}
	I0917 02:26:30.794660    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:30.794783    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/id_rsa Username:docker}
	I0917 02:26:30.831201    5221 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0917 02:26:30.831244    5221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:26:30.831311    5221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:26:30.884480    5221 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0917 02:26:30.884564    5221 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0917 02:26:30.884592    5221 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:26:30.884606    5221 start.go:495] detecting cgroup driver to use...
	I0917 02:26:30.884753    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:26:30.900568    5221 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0917 02:26:30.900827    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:26:30.909247    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:26:30.917752    5221 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:26:30.917806    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:26:30.926220    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:26:30.934462    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:26:30.942603    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:26:30.951114    5221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:26:30.959472    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:26:30.968017    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:26:30.976360    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
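The run of `sed` commands above rewrites /etc/containerd/config.toml so containerd matches the cluster's expectations: cgroupfs instead of the systemd cgroup driver, the runc v2 shim, the pinned pause image, and the standard CNI conf dir. Condensed into a single pass with the same expressions:

    # One-pass version of the containerd edits logged above (GNU sed, as on
    # the buildroot guest).
    sudo sed -i -r \
      -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' \
      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
      -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
      -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
      /etc/containerd/config.toml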
	I0917 02:26:30.984769    5221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:26:30.992045    5221 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0917 02:26:30.992145    5221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:26:30.999749    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:31.093570    5221 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:26:31.113288    5221 start.go:495] detecting cgroup driver to use...
	I0917 02:26:31.113365    5221 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:26:31.129793    5221 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0917 02:26:31.130343    5221 command_runner.go:130] > [Unit]
	I0917 02:26:31.130352    5221 command_runner.go:130] > Description=Docker Application Container Engine
	I0917 02:26:31.130357    5221 command_runner.go:130] > Documentation=https://docs.docker.com
	I0917 02:26:31.130362    5221 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0917 02:26:31.130367    5221 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0917 02:26:31.130373    5221 command_runner.go:130] > StartLimitBurst=3
	I0917 02:26:31.130377    5221 command_runner.go:130] > StartLimitIntervalSec=60
	I0917 02:26:31.130380    5221 command_runner.go:130] > [Service]
	I0917 02:26:31.130384    5221 command_runner.go:130] > Type=notify
	I0917 02:26:31.130387    5221 command_runner.go:130] > Restart=on-failure
	I0917 02:26:31.130391    5221 command_runner.go:130] > Environment=NO_PROXY=192.169.0.14
	I0917 02:26:31.130397    5221 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0917 02:26:31.130407    5221 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0917 02:26:31.130413    5221 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0917 02:26:31.130418    5221 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0917 02:26:31.130424    5221 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0917 02:26:31.130429    5221 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0917 02:26:31.130437    5221 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0917 02:26:31.130450    5221 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0917 02:26:31.130456    5221 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0917 02:26:31.130460    5221 command_runner.go:130] > ExecStart=
	I0917 02:26:31.130473    5221 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0917 02:26:31.130479    5221 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0917 02:26:31.130486    5221 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0917 02:26:31.130491    5221 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0917 02:26:31.130495    5221 command_runner.go:130] > LimitNOFILE=infinity
	I0917 02:26:31.130498    5221 command_runner.go:130] > LimitNPROC=infinity
	I0917 02:26:31.130501    5221 command_runner.go:130] > LimitCORE=infinity
	I0917 02:26:31.130506    5221 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0917 02:26:31.130511    5221 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0917 02:26:31.130515    5221 command_runner.go:130] > TasksMax=infinity
	I0917 02:26:31.130519    5221 command_runner.go:130] > TimeoutStartSec=0
	I0917 02:26:31.130524    5221 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0917 02:26:31.130528    5221 command_runner.go:130] > Delegate=yes
	I0917 02:26:31.130533    5221 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0917 02:26:31.130540    5221 command_runner.go:130] > KillMode=process
	I0917 02:26:31.130544    5221 command_runner.go:130] > [Install]
	I0917 02:26:31.130548    5221 command_runner.go:130] > WantedBy=multi-user.target
	I0917 02:26:31.130974    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:26:31.147559    5221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:26:31.165969    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:26:31.176642    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:26:31.187736    5221 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:26:31.212484    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:26:31.223903    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:26:31.239009    5221 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0917 02:26:31.239082    5221 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:26:31.242058    5221 command_runner.go:130] > /usr/bin/cri-dockerd
	I0917 02:26:31.242223    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:26:31.249613    5221 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:26:31.263010    5221 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:26:31.361860    5221 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:26:31.471436    5221 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:26:31.471459    5221 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:26:31.485353    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:31.575231    5221 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:26:33.866979    5221 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.291718764s)
	I0917 02:26:33.867054    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:26:33.877320    5221 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:26:33.890130    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:26:33.900643    5221 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:26:34.003947    5221 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:26:34.111645    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:34.213673    5221 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:26:34.228040    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:26:34.239007    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:34.333302    5221 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:26:34.394346    5221 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:26:34.394419    5221 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:26:34.400434    5221 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0917 02:26:34.400458    5221 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0917 02:26:34.400478    5221 command_runner.go:130] > Device: 0,22	Inode: 753         Links: 1
	I0917 02:26:34.400487    5221 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0917 02:26:34.400494    5221 command_runner.go:130] > Access: 2024-09-17 09:26:34.490657603 +0000
	I0917 02:26:34.400506    5221 command_runner.go:130] > Modify: 2024-09-17 09:26:34.490657603 +0000
	I0917 02:26:34.400514    5221 command_runner.go:130] > Change: 2024-09-17 09:26:34.492657418 +0000
	I0917 02:26:34.400519    5221 command_runner.go:130] >  Birth: -
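"Will wait 60s for socket path" is a poll: stat the socket until it exists or the deadline passes. Here it succeeds immediately; the general pattern looks something like:

    # Poll for a unix socket with a 60s deadline, mirroring the wait above.
    sock=/var/run/cri-dockerd.sock
    for _ in $(seq 1 60); do
      [ -S "$sock" ] && { echo "socket ready: $sock"; break; }
      sleep 1
    done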
	I0917 02:26:34.400540    5221 start.go:563] Will wait 60s for crictl version
	I0917 02:26:34.400600    5221 ssh_runner.go:195] Run: which crictl
	I0917 02:26:34.404203    5221 command_runner.go:130] > /usr/bin/crictl
	I0917 02:26:34.404403    5221 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:26:34.428361    5221 command_runner.go:130] > Version:  0.1.0
	I0917 02:26:34.428373    5221 command_runner.go:130] > RuntimeName:  docker
	I0917 02:26:34.428378    5221 command_runner.go:130] > RuntimeVersion:  27.2.1
	I0917 02:26:34.428382    5221 command_runner.go:130] > RuntimeApiVersion:  v1
	I0917 02:26:34.429325    5221 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:26:34.429417    5221 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:26:34.445620    5221 command_runner.go:130] > 27.2.1
	I0917 02:26:34.446459    5221 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:26:34.461220    5221 command_runner.go:130] > 27.2.1
	I0917 02:26:34.507338    5221 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:26:34.528162    5221 out.go:177]   - env NO_PROXY=192.169.0.14
	I0917 02:26:34.549250    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetIP
	I0917 02:26:34.549631    5221 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:26:34.553785    5221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
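The one-liner above adds (or refreshes) the host.minikube.internal entry idempotently: strip any existing line for the name, append the new mapping, and copy the result back over /etc/hosts. The same technique as a reusable function (the function name is illustrative):

    # add_host_entry IP NAME -- idempotent /etc/hosts update, same technique
    # as the logged command.
    add_host_entry() {
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    add_host_entry 192.169.0.1 host.minikube.internal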
	I0917 02:26:34.564128    5221 mustload.go:65] Loading cluster: multinode-232000
	I0917 02:26:34.564298    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:34.564519    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:34.564542    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:34.573114    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53474
	I0917 02:26:34.573439    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:34.573801    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:34.573822    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:34.574058    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:34.574195    5221 main.go:141] libmachine: (multinode-232000) Calling .GetState
	I0917 02:26:34.574285    5221 main.go:141] libmachine: (multinode-232000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:34.574351    5221 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid from json: 5233
	I0917 02:26:34.575311    5221 host.go:66] Checking if "multinode-232000" exists ...
	I0917 02:26:34.575577    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:34.575602    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:34.584073    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53476
	I0917 02:26:34.584410    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:34.584722    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:34.584735    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:34.584945    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:34.585060    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:26:34.585150    5221 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000 for IP: 192.169.0.15
	I0917 02:26:34.585156    5221 certs.go:194] generating shared ca certs ...
	I0917 02:26:34.585168    5221 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:26:34.585314    5221 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:26:34.585369    5221 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:26:34.585378    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:26:34.585402    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:26:34.585420    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:26:34.585437    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:26:34.585511    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:26:34.585561    5221 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:26:34.585571    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:26:34.585610    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:26:34.585643    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:26:34.585677    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:26:34.585740    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:26:34.585778    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:26:34.585798    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:26:34.585816    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:26:34.585840    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:26:34.605627    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:26:34.624677    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:26:34.643935    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:26:34.663544    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:26:34.682566    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:26:34.701196    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:26:34.720066    5221 ssh_runner.go:195] Run: openssl version
	I0917 02:26:34.724235    5221 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0917 02:26:34.724443    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:26:34.733732    5221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:26:34.736968    5221 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:26:34.737105    5221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:26:34.737158    5221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:26:34.741230    5221 command_runner.go:130] > 51391683
	I0917 02:26:34.741425    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:26:34.750892    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:26:34.760076    5221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:26:34.763382    5221 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:26:34.763486    5221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:26:34.763534    5221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:26:34.767716    5221 command_runner.go:130] > 3ec20f2e
	I0917 02:26:34.767896    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:26:34.777167    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:26:34.786647    5221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:26:34.790070    5221 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:26:34.790112    5221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:26:34.790164    5221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:26:34.794472    5221 command_runner.go:130] > b5213941
	I0917 02:26:34.794524    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
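The hash/symlink pairs above are a manual c_rehash: OpenSSL locates trust anchors in /etc/ssl/certs by subject-hash file names, so each CA gets a <hash>.0 symlink. The per-certificate step:

    # Install one CA into the OpenSSL trust dir by subject hash, as the log
    # does for 1560.pem, 15602.pem, and minikubeCA.pem.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"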
	I0917 02:26:34.803820    5221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:26:34.806949    5221 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 02:26:34.807013    5221 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 02:26:34.807045    5221 kubeadm.go:934] updating node {m02 192.169.0.15 8443 v1.31.1 docker false true} ...
	I0917 02:26:34.807107    5221 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-232000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:26:34.807157    5221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:26:34.815167    5221 command_runner.go:130] > kubeadm
	I0917 02:26:34.815176    5221 command_runner.go:130] > kubectl
	I0917 02:26:34.815180    5221 command_runner.go:130] > kubelet
	I0917 02:26:34.815283    5221 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:26:34.815336    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0917 02:26:34.823647    5221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0917 02:26:34.837368    5221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:26:34.850966    5221 ssh_runner.go:195] Run: grep 192.169.0.14	control-plane.minikube.internal$ /etc/hosts
	I0917 02:26:34.853962    5221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:26:34.864545    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:34.968936    5221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:26:34.984578    5221 host.go:66] Checking if "multinode-232000" exists ...
	I0917 02:26:34.984882    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:34.984909    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:34.994092    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53478
	I0917 02:26:34.994489    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:34.994849    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:34.994863    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:34.995089    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:34.995220    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:26:34.995319    5221 start.go:317] joinCluster: &{Name:multinode-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.16 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:26:34.995412    5221 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0917 02:26:34.995434    5221 host.go:66] Checking if "multinode-232000-m02" exists ...
	I0917 02:26:34.995714    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:34.995739    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:35.004663    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53480
	I0917 02:26:35.005092    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:35.005399    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:35.005410    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:35.005639    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:35.005752    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:35.005845    5221 mustload.go:65] Loading cluster: multinode-232000
	I0917 02:26:35.006022    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:35.006268    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:35.006294    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:35.015188    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53482
	I0917 02:26:35.015530    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:35.015892    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:35.015909    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:35.016143    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:35.016263    5221 main.go:141] libmachine: (multinode-232000) Calling .GetState
	I0917 02:26:35.016347    5221 main.go:141] libmachine: (multinode-232000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:35.016429    5221 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid from json: 5233
	I0917 02:26:35.017415    5221 host.go:66] Checking if "multinode-232000" exists ...
	I0917 02:26:35.017676    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:35.017704    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:35.026838    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53484
	I0917 02:26:35.027207    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:35.027564    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:35.027581    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:35.027777    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:35.027890    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:26:35.027986    5221 api_server.go:166] Checking apiserver status ...
	I0917 02:26:35.028041    5221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:26:35.028052    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:26:35.028139    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:26:35.028243    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:26:35.028330    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:26:35.028416    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:26:35.067637    5221 command_runner.go:130] > 1651
	I0917 02:26:35.067709    5221 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1651/cgroup
	W0917 02:26:35.076285    5221 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1651/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:26:35.076359    5221 ssh_runner.go:195] Run: ls
	I0917 02:26:35.079921    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:26:35.083627    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
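
The apiserver check above is two probes: a pgrep for the kube-apiserver process (the follow-up freezer-cgroup lookup fails harmlessly on cgroup v2 guests, which have no named freezer hierarchy, and the code simply moves on) and an HTTPS GET against /healthz expecting 200 with body "ok". A self-contained sketch of the HTTP half; the endpoint is taken from the log, while InsecureSkipVerify is a stand-in for the client certificates the test actually loads:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The real client authenticates with the profile's client.crt/key.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.169.0.14:8443/healthz")
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // healthy apiserver: "200 ok"
    }
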
	I0917 02:26:35.083687    5221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-232000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0917 02:26:35.164216    5221 command_runner.go:130] > node/multinode-232000-m02 cordoned
	I0917 02:26:38.193073    5221 command_runner.go:130] > pod "busybox-7dff88458-8tvvp" has DeletionTimestamp older than 1 seconds, skipping
	I0917 02:26:38.193087    5221 command_runner.go:130] > node/multinode-232000-m02 drained
	I0917 02:26:38.194698    5221 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-bz9gj, kube-system/kube-proxy-8fb4t
	I0917 02:26:38.194803    5221 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-232000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.111089392s)
	I0917 02:26:38.194813    5221 node.go:128] successfully drained node "multinode-232000-m02"
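
The drain that just completed uses aggressive flags suited to a throwaway test node: --force (pods without a controller), --grace-period=1, --disable-eviction (delete instead of evict, ignoring PodDisruptionBudgets), --ignore-daemonsets, and --delete-emptydir-data. A sketch of the same invocation via os/exec, flags copied verbatim from the log; the test actually runs this over SSH on the control-plane VM:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // sudo accepts VAR=value assignments ahead of the command.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.1/kubectl", "drain", "multinode-232000-m02",
            "--force", "--grace-period=1", "--skip-wait-for-delete-timeout=1",
            "--disable-eviction", "--ignore-daemonsets", "--delete-emptydir-data")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out) // expect "node/... cordoned" followed by "node/... drained"
        if err != nil {
            fmt.Println("drain failed:", err)
        }
    }
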
	I0917 02:26:38.194838    5221 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0917 02:26:38.194856    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:38.195024    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:38.195120    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:38.195213    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:38.195283    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/id_rsa Username:docker}
	I0917 02:26:38.292934    5221 command_runner.go:130] ! W0917 09:26:38.440911    1326 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0917 02:26:38.336217    5221 command_runner.go:130] ! W0917 09:26:38.484146    1326 cleanupnode.go:105] [reset] Failed to remove containers: failed to stop running pod adb846aaec844e84568d1e66bb150b22c5064af45b85ce68490175a102fcf711: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "busybox-7dff88458-8tvvp_default" network: cni config uninitialized
	I0917 02:26:38.338538    5221 command_runner.go:130] > [preflight] Running pre-flight checks
	I0917 02:26:38.338549    5221 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0917 02:26:38.338554    5221 command_runner.go:130] > [reset] Stopping the kubelet service
	I0917 02:26:38.338567    5221 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0917 02:26:38.338580    5221 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0917 02:26:38.338598    5221 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0917 02:26:38.338604    5221 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0917 02:26:38.338611    5221 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0917 02:26:38.338616    5221 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0917 02:26:38.338624    5221 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0917 02:26:38.338630    5221 command_runner.go:130] > to reset your system's IPVS tables.
	I0917 02:26:38.338638    5221 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0917 02:26:38.338651    5221 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0917 02:26:38.338661    5221 node.go:155] successfully reset node "multinode-232000-m02"
	I0917 02:26:38.338924    5221 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:26:38.339125    5221 kapi.go:59] client config for multinode-232000: &rest.Config{Host:"https://192.169.0.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x410b720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:26:38.339401    5221 request.go:1351] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0917 02:26:38.339439    5221 round_trippers.go:463] DELETE https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:38.339444    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:38.339450    5221 round_trippers.go:473]     Content-Type: application/json
	I0917 02:26:38.339454    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:38.339457    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:38.342177    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:38.342187    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:38.342192    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:38.342196    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:38.342199    5221 round_trippers.go:580]     Content-Length: 171
	I0917 02:26:38.342202    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:38 GMT
	I0917 02:26:38.342204    5221 round_trippers.go:580]     Audit-Id: e79ace76-551d-42c7-a3a2-f1570f343321
	I0917 02:26:38.342206    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:38.342208    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:38.342323    5221 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-232000-m02","kind":"nodes","uid":"b0d6988f-c01e-465b-b2df-6e79ea652296"}}
	I0917 02:26:38.342344    5221 node.go:180] successfully deleted node "multinode-232000-m02"
	I0917 02:26:38.342351    5221 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
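
With m02 drained and reset, the Node object itself is removed with a plain DELETE on /api/v1/nodes/multinode-232000-m02 carrying a v1 DeleteOptions body; the 200 Status/Success response is logged above. The equivalent call through client-go, assuming the kubeconfig path the harness uses:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19648-1025/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Issues DELETE /api/v1/nodes/multinode-232000-m02 with a DeleteOptions
        // body, matching the round-tripper trace above.
        if err := cs.CoreV1().Nodes().Delete(context.Background(), "multinode-232000-m02", metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
        fmt.Println(`node "multinode-232000-m02" deleted`)
    }
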
	I0917 02:26:38.342366    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 02:26:38.342378    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:26:38.342522    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:26:38.342644    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:26:38.342740    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:26:38.342825    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:26:38.448171    5221 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 8aym42.82ssaevmx169fm1f --discovery-token-ca-cert-hash sha256:7d55cb80c4cca3ccf2edf1a375f9019832d41d9684c82d992c80cf0c3419888b 
	I0917 02:26:38.449706    5221 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0917 02:26:38.449725    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8aym42.82ssaevmx169fm1f --discovery-token-ca-cert-hash sha256:7d55cb80c4cca3ccf2edf1a375f9019832d41d9684c82d992c80cf0c3419888b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-232000-m02"
	I0917 02:26:38.482499    5221 command_runner.go:130] > [preflight] Running pre-flight checks
	I0917 02:26:38.557396    5221 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0917 02:26:38.557413    5221 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0917 02:26:38.587915    5221 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 02:26:38.587930    5221 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 02:26:38.587935    5221 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0917 02:26:38.702866    5221 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 02:26:39.215760    5221 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 513.228573ms
	I0917 02:26:39.215775    5221 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0917 02:26:40.227022    5221 command_runner.go:130] > This node has joined the cluster:
	I0917 02:26:40.227037    5221 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0917 02:26:40.227043    5221 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0917 02:26:40.227048    5221 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0917 02:26:40.228914    5221 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 02:26:40.229057    5221 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8aym42.82ssaevmx169fm1f --discovery-token-ca-cert-hash sha256:7d55cb80c4cca3ccf2edf1a375f9019832d41d9684c82d992c80cf0c3419888b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-232000-m02": (1.779310241s)
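
The rejoin is a two-command handshake, both visible above: the control plane mints a join line with "kubeadm token create --print-join-command --ttl=0", and the worker replays it with --ignore-preflight-errors=all, the cri-dockerd socket, and an explicit --node-name. A single-host sketch of that flow; the real test runs each half on its own VM over SSH and prepends the minikube binaries directory to PATH, both simplified here:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Control plane: mint a join command with a non-expiring token.
        out, err := exec.Command("sudo", "env", "PATH=/var/lib/minikube/binaries/v1.31.1:/usr/bin",
            "kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        // join is: kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash sha256:...
        join := strings.Fields(strings.TrimSpace(string(out)))

        // Worker: replay the printed command plus the extra flags from the log.
        args := append(join[1:],
            "--ignore-preflight-errors=all",
            "--cri-socket", "unix:///var/run/cri-dockerd.sock",
            "--node-name=multinode-232000-m02")
        b, err := exec.Command("sudo", append([]string{join[0]}, args...)...).CombinedOutput()
        fmt.Printf("%s", b) // success ends with "This node has joined the cluster:"
        if err != nil {
            panic(err)
        }
    }
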
	I0917 02:26:40.229078    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 02:26:40.449219    5221 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0917 02:26:40.449315    5221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-232000-m02 minikube.k8s.io/updated_at=2024_09_17T02_26_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=multinode-232000 minikube.k8s.io/primary=false
	I0917 02:26:40.533346    5221 command_runner.go:130] > node/multinode-232000-m02 labeled
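
The freshly joined node is then stamped with minikube's bookkeeping labels (updated_at, version, commit, profile name, primary=false). The log does it with kubectl label --overwrite over SSH; through client-go the same mutation is a strategic-merge patch on the Node's metadata, with label values copied from the invocation above:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19648-1025/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Label values copied from the kubectl label command in the log.
        patch := []byte(`{"metadata":{"labels":{` +
            `"minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700",` +
            `"minikube.k8s.io/version":"v1.34.0",` +
            `"minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61",` +
            `"minikube.k8s.io/name":"multinode-232000",` +
            `"minikube.k8s.io/primary":"false"}}}`)
        if _, err := cs.CoreV1().Nodes().Patch(context.Background(), "multinode-232000-m02",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("node/multinode-232000-m02 labeled")
    }
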
	I0917 02:26:40.533368    5221 start.go:319] duration metric: took 5.538024221s to joinCluster
	I0917 02:26:40.533402    5221 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0917 02:26:40.533622    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:40.556930    5221 out.go:177] * Verifying Kubernetes components...
	I0917 02:26:40.598764    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:40.690186    5221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:26:40.703527    5221 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:26:40.703718    5221 kapi.go:59] client config for multinode-232000: &rest.Config{Host:"https://192.169.0.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x410b720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:26:40.703914    5221 node_ready.go:35] waiting up to 6m0s for node "multinode-232000-m02" to be "Ready" ...
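
Everything that follows is this wait loop unrolled: GET the Node roughly every 500ms and check whether its NodeReady condition has turned True, giving up after 6 minutes. A condensed client-go sketch of the loop, with a plain sleep-based poll standing in for minikube's node_ready helper and the kubeconfig path taken from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19648-1025/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-232000-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println(`node "multinode-232000-m02" is Ready`)
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // the trace polls on roughly this cadence
        }
        fmt.Println("timed out waiting for Ready")
    }

Until the new node's kubelet and CNI come up, each iteration logs the "Ready":"False" line seen in the trace below.
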
	I0917 02:26:40.703964    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:40.703969    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:40.703974    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:40.703979    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:40.705513    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:40.705522    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:40.705528    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:40.705532    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:40.705535    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:40.705551    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:40 GMT
	I0917 02:26:40.705559    5221 round_trippers.go:580]     Audit-Id: e9034616-90ec-437c-97a3-d918ead229a3
	I0917 02:26:40.705561    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:40.705635    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"980","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3564 chars]
	I0917 02:26:41.205042    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:41.205054    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:41.205061    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:41.205064    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:41.207847    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:41.207862    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:41.207868    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:41.207872    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:41.207875    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:41 GMT
	I0917 02:26:41.207878    5221 round_trippers.go:580]     Audit-Id: c6b8f0c2-bdfa-4f42-9bcd-a1d3f8563e06
	I0917 02:26:41.207880    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:41.207883    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:41.208045    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"980","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3564 chars]
	I0917 02:26:41.704598    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:41.704611    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:41.704617    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:41.704622    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:41.706638    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:41.706650    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:41.706654    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:41.706657    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:41 GMT
	I0917 02:26:41.706659    5221 round_trippers.go:580]     Audit-Id: 450959da-e275-4be9-8e1c-88a712a8a297
	I0917 02:26:41.706662    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:41.706664    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:41.706667    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:41.706900    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"980","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3564 chars]
	I0917 02:26:42.205166    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:42.205180    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:42.205186    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:42.205189    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:42.206905    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:42.206915    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:42.206921    5221 round_trippers.go:580]     Audit-Id: ab379873-93db-4948-aed0-622077ccb5b3
	I0917 02:26:42.206924    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:42.206926    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:42.206930    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:42.206939    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:42.206942    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:42 GMT
	I0917 02:26:42.207229    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:42.704754    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:42.704781    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:42.704792    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:42.704797    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:42.707400    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:42.707416    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:42.707423    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:42.707428    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:42 GMT
	I0917 02:26:42.707433    5221 round_trippers.go:580]     Audit-Id: b8237b06-1cee-42bc-acbb-0febe5fbdda1
	I0917 02:26:42.707436    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:42.707439    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:42.707442    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:42.707591    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:42.707812    5221 node_ready.go:53] node "multinode-232000-m02" has status "Ready":"False"
	I0917 02:26:43.206206    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:43.206233    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:43.206288    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:43.206301    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:43.210590    5221 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:26:43.210614    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:43.210621    5221 round_trippers.go:580]     Audit-Id: 9f069409-4e29-4849-bc2e-b27f90cbb81e
	I0917 02:26:43.210624    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:43.210627    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:43.210642    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:43.210649    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:43.210674    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:43 GMT
	I0917 02:26:43.210728    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:43.704522    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:43.704550    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:43.704562    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:43.704577    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:43.707127    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:43.707144    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:43.707154    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:43.707159    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:43.707164    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:43 GMT
	I0917 02:26:43.707167    5221 round_trippers.go:580]     Audit-Id: 175ec7a3-76b6-4ac4-948d-d1ec35a8370e
	I0917 02:26:43.707170    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:43.707174    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:43.707515    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:44.204276    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:44.204296    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:44.204307    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:44.204315    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:44.206349    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:44.206365    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:44.206375    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:44.206382    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:44.206388    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:44 GMT
	I0917 02:26:44.206393    5221 round_trippers.go:580]     Audit-Id: 4afbce11-1036-4327-8289-01a805771094
	I0917 02:26:44.206437    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:44.206447    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:44.206620    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:44.706137    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:44.706196    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:44.706207    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:44.706214    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:44.709595    5221 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:26:44.709610    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:44.709617    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:44 GMT
	I0917 02:26:44.709622    5221 round_trippers.go:580]     Audit-Id: d919e900-7db4-4738-8c26-ef42edd87761
	I0917 02:26:44.709625    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:44.709630    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:44.709641    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:44.709648    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:44.709758    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:44.709987    5221 node_ready.go:53] node "multinode-232000-m02" has status "Ready":"False"
	I0917 02:26:45.204865    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:45.204884    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:45.204895    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:45.204902    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:45.207236    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:45.207251    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:45.207258    5221 round_trippers.go:580]     Audit-Id: 4f05de8d-7d0d-4db6-bc65-2a86b66d44e1
	I0917 02:26:45.207263    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:45.207267    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:45.207271    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:45.207274    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:45.207277    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:45 GMT
	I0917 02:26:45.207387    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:45.705028    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:45.705058    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:45.705072    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:45.705081    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:45.707870    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:45.707885    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:45.707892    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:45.707897    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:45.707901    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:45.707905    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:45.707908    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:45 GMT
	I0917 02:26:45.707911    5221 round_trippers.go:580]     Audit-Id: cab2b667-95a2-4fa7-9823-9feb0bf49a7f
	I0917 02:26:45.708054    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:46.205958    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:46.206023    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:46.206081    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:46.206095    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:46.208712    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:46.208725    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:46.208731    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:46 GMT
	I0917 02:26:46.208736    5221 round_trippers.go:580]     Audit-Id: 1f2133c4-73b0-4118-9b54-2acbbd6468d5
	I0917 02:26:46.208740    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:46.208743    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:46.208748    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:46.208752    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:46.208874    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:46.706203    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:46.706234    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:46.706247    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:46.706254    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:46.708936    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:46.708953    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:46.708962    5221 round_trippers.go:580]     Audit-Id: 12806bb7-0594-4493-9672-25343dd3f338
	I0917 02:26:46.708971    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:46.708976    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:46.708979    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:46.708982    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:46.708986    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:46 GMT
	I0917 02:26:46.709119    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:47.205588    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:47.205604    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:47.205613    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:47.205617    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:47.207789    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:47.207801    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:47.207810    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:47.207814    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:47 GMT
	I0917 02:26:47.207817    5221 round_trippers.go:580]     Audit-Id: 1ebccb42-0f35-48e5-8d90-13029f3c23b1
	I0917 02:26:47.207820    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:47.207822    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:47.207825    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:47.208016    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:47.208208    5221 node_ready.go:53] node "multinode-232000-m02" has status "Ready":"False"
	I0917 02:26:47.705503    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:47.705532    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:47.705544    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:47.705551    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:47.708306    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:47.708328    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:47.708338    5221 round_trippers.go:580]     Audit-Id: 216a3399-f645-4943-8744-e2b320ec60bd
	I0917 02:26:47.708345    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:47.708351    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:47.708358    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:47.708367    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:47.708377    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:47 GMT
	I0917 02:26:47.708481    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:48.205451    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:48.205498    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:48.205509    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:48.205529    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:48.207304    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:48.207318    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:48.207323    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:48.207326    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:48.207328    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:48.207330    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:48 GMT
	I0917 02:26:48.207332    5221 round_trippers.go:580]     Audit-Id: bec87cad-1e4b-475c-bc6c-af883efcaa5c
	I0917 02:26:48.207335    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:48.207459    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:48.704919    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:48.704941    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:48.704953    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:48.704960    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:48.707743    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:48.707759    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:48.707766    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:48.707770    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:48.707775    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:48.707779    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:48 GMT
	I0917 02:26:48.707782    5221 round_trippers.go:580]     Audit-Id: fd08b290-f6db-4928-b6fd-5c0355dee24f
	I0917 02:26:48.707786    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:48.707935    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:49.204628    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:49.204654    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:49.204666    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:49.204673    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:49.207587    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:49.207604    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:49.207610    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:49.207615    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:49 GMT
	I0917 02:26:49.207618    5221 round_trippers.go:580]     Audit-Id: a11acfcb-5ab4-4229-b0d8-a32cafd6295d
	I0917 02:26:49.207636    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:49.207642    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:49.207646    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:49.207710    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:49.706162    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:49.706216    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:49.706229    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:49.706237    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:49.708948    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:49.708964    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:49.708970    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:49.708974    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:49.708977    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:49 GMT
	I0917 02:26:49.708981    5221 round_trippers.go:580]     Audit-Id: de84c1db-b516-416a-92ec-b1e8a2ffc5b9
	I0917 02:26:49.708985    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:49.708988    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:49.709091    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:49.709325    5221 node_ready.go:53] node "multinode-232000-m02" has status "Ready":"False"
	I0917 02:26:50.205145    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:50.205167    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:50.205179    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:50.205185    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:50.207791    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:50.207807    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:50.207814    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:50 GMT
	I0917 02:26:50.207818    5221 round_trippers.go:580]     Audit-Id: 661d91e3-ca0a-49c6-ad0a-98089ee256dc
	I0917 02:26:50.207837    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:50.207849    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:50.207854    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:50.207860    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:50.207971    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:50.706210    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:50.706237    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:50.706249    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:50.706254    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:50.709206    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:50.709231    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:50.709239    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:50 GMT
	I0917 02:26:50.709244    5221 round_trippers.go:580]     Audit-Id: 066c91b0-8955-452c-a6a4-1fd5d4cb52c1
	I0917 02:26:50.709248    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:50.709251    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:50.709256    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:50.709260    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:50.709614    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:51.205828    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:51.205853    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:51.205864    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:51.205869    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:51.208730    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:51.208749    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:51.208757    5221 round_trippers.go:580]     Audit-Id: a680a971-b3db-4a40-b236-751e281f0c10
	I0917 02:26:51.208778    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:51.208787    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:51.208791    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:51.208799    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:51.208803    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:51 GMT
	I0917 02:26:51.209057    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:51.704733    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:51.704759    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:51.704771    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:51.704787    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:51.707494    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:51.707520    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:51.707536    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:51 GMT
	I0917 02:26:51.707551    5221 round_trippers.go:580]     Audit-Id: 1c60bd6d-6119-45b2-8b03-b68f4fdfefc1
	I0917 02:26:51.707561    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:51.707564    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:51.707569    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:51.707572    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:51.707928    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:52.206016    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:52.206057    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:52.206068    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:52.206076    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:52.208749    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:52.208766    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:52.208787    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:52.208802    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:52.208811    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:52.208817    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:52.208821    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:52 GMT
	I0917 02:26:52.208825    5221 round_trippers.go:580]     Audit-Id: bd24169b-45b6-49b1-b352-c23101412f71
	I0917 02:26:52.209147    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:52.209381    5221 node_ready.go:53] node "multinode-232000-m02" has status "Ready":"False"
	I0917 02:26:52.704751    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:52.704777    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:52.704789    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:52.704797    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:52.707436    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:52.707452    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:52.707461    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:52.707466    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:52.707471    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:52.707482    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:52 GMT
	I0917 02:26:52.707487    5221 round_trippers.go:580]     Audit-Id: 1965a618-b2f5-4b72-89ec-7ec58c288586
	I0917 02:26:52.707490    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:52.707552    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:53.204650    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:53.204670    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:53.204695    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:53.204704    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:53.206425    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:53.206437    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:53.206443    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:53.206446    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:53.206449    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:53 GMT
	I0917 02:26:53.206451    5221 round_trippers.go:580]     Audit-Id: 5f47ce88-be06-470e-8276-4a5c0bf159e6
	I0917 02:26:53.206453    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:53.206456    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:53.206558    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:53.704222    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:53.704280    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:53.704296    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:53.704310    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:53.706876    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:53.706897    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:53.706908    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:53.706913    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:53.706935    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:53.706940    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:53 GMT
	I0917 02:26:53.706943    5221 round_trippers.go:580]     Audit-Id: 09381c89-2fbd-4747-b0e0-8e2517fdd396
	I0917 02:26:53.706946    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:53.707195    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:54.205593    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:54.205606    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.205613    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.205616    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.206954    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:54.206966    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.206972    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.206975    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.206979    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.206981    5221 round_trippers.go:580]     Audit-Id: b69f3ebc-ce49-4646-980b-04f4a53c14f8
	I0917 02:26:54.206984    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.206987    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.207107    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:54.706281    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:54.706309    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.706361    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.706373    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.708895    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:54.708915    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.708922    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.708927    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.708951    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.708960    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.708963    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.708967    5221 round_trippers.go:580]     Audit-Id: c398d8de-3b2b-4ae2-986b-7f6884235f5d
	I0917 02:26:54.709060    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1027","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3932 chars]
	I0917 02:26:54.709297    5221 node_ready.go:49] node "multinode-232000-m02" has status "Ready":"True"
	I0917 02:26:54.709309    5221 node_ready.go:38] duration metric: took 14.005321244s for node "multinode-232000-m02" to be "Ready" ...
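	[editor note] The ~500 ms GET loop above (node_ready.go) is polling the node object until its Ready condition reports True; the resourceVersion moving 1019 -> 1027 marks the status flip. A minimal client-go sketch of that style of wait, under assumed names and the default kubeconfig location (illustrative only, not minikube's actual helper):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the Node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			n, err := cs.CoreV1().Nodes().Get(ctx, "multinode-232000-m02", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node is Ready")
				return
			}
			if ctx.Err() != nil {
				panic(ctx.Err()) // give up once the overall timeout expires
			}
			// Roughly the 500 ms cadence visible in the timestamps above.
			time.Sleep(500 * time.Millisecond)
		}
	}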
	I0917 02:26:54.709316    5221 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:26:54.709367    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:54.709374    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.709382    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.709387    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.711895    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:54.711906    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.711911    5221 round_trippers.go:580]     Audit-Id: 65810a52-6b1a-4681-9c14-47de07b164ab
	I0917 02:26:54.711915    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.711918    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.711922    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.711925    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.711928    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.712717    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1027"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"926","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 89364 chars]
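	[editor note] The single PodList GET above pulls every kube-system pod once (at resourceVersion 1027); the per-pod waits that follow then re-check each system-critical pod individually. A sketch of the list step with client-go (package and function names are assumed):

	package syspods

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// listKubeSystem fetches all pods in kube-system, matching the
	// GET /api/v1/namespaces/kube-system/pods request in the log.
	func listKubeSystem(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
		list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return nil, err
		}
		return list.Items, nil
	}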
	I0917 02:26:54.714652    5221 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.714700    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:54.714705    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.714710    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.714714    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.715948    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:54.715959    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.715965    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.715967    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.715971    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.715978    5221 round_trippers.go:580]     Audit-Id: 11a52131-ec17-4cd6-9d95-dc6af5a8f9ad
	I0917 02:26:54.715989    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.716000    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.716173    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"926","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7039 chars]
	I0917 02:26:54.716435    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:54.716442    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.716451    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.716456    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.717441    5221 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:26:54.717451    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.717462    5221 round_trippers.go:580]     Audit-Id: 5743721b-89a4-4f23-baee-f74e75914f89
	I0917 02:26:54.717471    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.717477    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.717480    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.717483    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.717487    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.717595    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5172 chars]
	I0917 02:26:54.717764    5221 pod_ready.go:93] pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:54.717771    5221 pod_ready.go:82] duration metric: took 3.109535ms for pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace to be "Ready" ...
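	[editor note] Each per-pod wait in this stretch resolves the pod's Ready condition (followed by a node lookup); the coredns check above came back True in ~3.1 ms because the pod was already running. An illustrative condition helper in the same spirit (not minikube's actual code):

	package podready

	import (
		corev1 "k8s.io/api/core/v1"
	)

	// podReady reports whether the Pod's Ready condition is True — the
	// check behind the `has status "Ready":"True"` lines in this log.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}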
	I0917 02:26:54.717777    5221 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.717813    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-232000
	I0917 02:26:54.717818    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.717826    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.717830    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.718847    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:54.718855    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.718861    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.718865    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.718868    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.718870    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.718872    5221 round_trippers.go:580]     Audit-Id: 52ce59f6-74f6-43c0-9fe8-101666220ed8
	I0917 02:26:54.718875    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.719114    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-232000","namespace":"kube-system","uid":"023b8525-6267-41df-ab63-f9c82adf3da1","resourceVersion":"895","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.mirror":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953777Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6663 chars]
	I0917 02:26:54.719319    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:54.719326    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.719332    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.719336    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.720347    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:54.720354    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.720359    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.720362    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.720365    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.720368    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.720371    5221 round_trippers.go:580]     Audit-Id: bef4ea84-9061-402b-9ff2-93b6b76f44a8
	I0917 02:26:54.720373    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.720475    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5172 chars]
	I0917 02:26:54.720645    5221 pod_ready.go:93] pod "etcd-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:54.720652    5221 pod_ready.go:82] duration metric: took 2.87192ms for pod "etcd-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.720669    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.720699    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-232000
	I0917 02:26:54.720703    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.720708    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.720712    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.721675    5221 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:26:54.721684    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.721700    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.721707    5221 round_trippers.go:580]     Audit-Id: 425ed1c6-8d21-4c82-830e-dbc18d1e8788
	I0917 02:26:54.721712    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.721716    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.721719    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.721723    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.721818    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-232000","namespace":"kube-system","uid":"4bc0fa4f-4ca7-478d-8b7c-b59c24a56faa","resourceVersion":"899","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"67a66a13a29b1cc7d88ed81650cbab1c","kubernetes.io/config.mirror":"67a66a13a29b1cc7d88ed81650cbab1c","kubernetes.io/config.seen":"2024-09-17T09:21:50.527954370Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0917 02:26:54.722040    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:54.722046    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.722051    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.722056    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.722928    5221 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:26:54.722934    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.722938    5221 round_trippers.go:580]     Audit-Id: 70b33c2d-ed03-4f03-94ed-729d440b127f
	I0917 02:26:54.722948    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.722951    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.722953    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.722956    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.722958    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.723104    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5172 chars]
	I0917 02:26:54.723271    5221 pod_ready.go:93] pod "kube-apiserver-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:54.723279    5221 pod_ready.go:82] duration metric: took 2.605319ms for pod "kube-apiserver-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.723285    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.723313    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-232000
	I0917 02:26:54.723320    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.723335    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.723341    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.724439    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:54.724446    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.724451    5221 round_trippers.go:580]     Audit-Id: 7ff0d1c5-f140-4bf1-9475-96a70dce641b
	I0917 02:26:54.724454    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.724456    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.724459    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.724466    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.724469    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.724629    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-232000","namespace":"kube-system","uid":"788e2a30-fcea-4f4c-afc3-52d73d046e1d","resourceVersion":"914","creationTimestamp":"2024-09-17T09:21:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70ae9bf162cfffffa860fd43666d4b44","kubernetes.io/config.mirror":"70ae9bf162cfffffa860fd43666d4b44","kubernetes.io/config.seen":"2024-09-17T09:21:55.992286729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0917 02:26:54.724860    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:54.724867    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.724872    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.724875    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.725742    5221 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:26:54.725751    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.725759    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.725763    5221 round_trippers.go:580]     Audit-Id: a524a2ff-b379-4ef9-a11a-100985947566
	I0917 02:26:54.725766    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.725769    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.725771    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.725774    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.725908    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5172 chars]
	I0917 02:26:54.726083    5221 pod_ready.go:93] pod "kube-controller-manager-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:54.726090    5221 pod_ready.go:82] duration metric: took 2.800799ms for pod "kube-controller-manager-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.726099    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8fb4t" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.907369    5221 request.go:632] Waited for 181.176649ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fb4t
	I0917 02:26:54.907432    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fb4t
	I0917 02:26:54.907443    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.907453    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.907459    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.910021    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:54.910037    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.910044    5221 round_trippers.go:580]     Audit-Id: 8873ea26-61d5-45b4-99a1-26e711d7fba6
	I0917 02:26:54.910048    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.910052    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.910056    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.910059    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.910065    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:55 GMT
	I0917 02:26:54.910206    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8fb4t","generateName":"kube-proxy-","namespace":"kube-system","uid":"e73b5d46-804f-4a13-a286-f0194436c3fc","resourceVersion":"1006","creationTimestamp":"2024-09-17T09:22:43Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
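	[editor note] The `Waited for ... due to client-side throttling, not priority and fairness` lines come from client-go's own request rate limiter, not from the API server: with a rest.Config left at zero values, client-go applies default QPS/burst limits, so the burst of GETs in this readiness sweep gets delayed ~180-200 ms per request. An illustrative way to relax the limiter (the numbers are arbitrary, not minikube's settings):

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Zero values mean client-go's defaults; raising QPS and Burst
		// shortens or removes throttle waits like the ones in this log.
		cfg.QPS = 50
		cfg.Burst = 100
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}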
	I0917 02:26:55.107126    5221 request.go:632] Waited for 196.555016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:55.107211    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:55.107222    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:55.107233    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:55.107240    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:55.109622    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:55.109635    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:55.109642    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:55.109645    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:55.109648    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:55.109652    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:55.109657    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:55 GMT
	I0917 02:26:55.109660    5221 round_trippers.go:580]     Audit-Id: e64bfbe8-9684-4016-930e-e97450ef7e14
	I0917 02:26:55.109844    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1027","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3932 chars]
	I0917 02:26:55.110081    5221 pod_ready.go:93] pod "kube-proxy-8fb4t" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:55.110092    5221 pod_ready.go:82] duration metric: took 383.985246ms for pod "kube-proxy-8fb4t" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:55.110100    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9s8zh" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:55.306481    5221 request.go:632] Waited for 196.334498ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9s8zh
	I0917 02:26:55.306535    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9s8zh
	I0917 02:26:55.306541    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:55.306547    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:55.306550    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:55.308155    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:55.308164    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:55.308169    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:55.308172    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:55.308175    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:55.308178    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:55.308180    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:55 GMT
	I0917 02:26:55.308182    5221 round_trippers.go:580]     Audit-Id: 3f765a57-d32d-4cea-bbfa-e83fb9c0627d
	I0917 02:26:55.308309    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9s8zh","generateName":"kube-proxy-","namespace":"kube-system","uid":"8516d216-3857-4702-9656-97c8c91337fc","resourceVersion":"890","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6394 chars]
	I0917 02:26:55.507848    5221 request.go:632] Waited for 199.260774ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:55.507923    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:55.507931    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:55.507939    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:55.507948    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:55.509929    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:55.509943    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:55.509951    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:55.509958    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:55.509963    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:55 GMT
	I0917 02:26:55.509968    5221 round_trippers.go:580]     Audit-Id: cb92c8b5-1ddd-43cd-be4c-f0b2ac6cbacb
	I0917 02:26:55.509973    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:55.509977    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:55.510144    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5172 chars]
	I0917 02:26:55.510346    5221 pod_ready.go:93] pod "kube-proxy-9s8zh" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:55.510355    5221 pod_ready.go:82] duration metric: took 400.247776ms for pod "kube-proxy-9s8zh" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:55.510362    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xlb2z" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:55.707746    5221 request.go:632] Waited for 197.337667ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xlb2z
	I0917 02:26:55.707828    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xlb2z
	I0917 02:26:55.707846    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:55.707859    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:55.707865    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:55.710457    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:55.710472    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:55.710479    5221 round_trippers.go:580]     Audit-Id: 5c4edb6b-48c8-4507-a9e7-40e68cc85f8a
	I0917 02:26:55.710484    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:55.710488    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:55.710493    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:55.710497    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:55.710503    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:55 GMT
	I0917 02:26:55.710609    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xlb2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"66e8dada-5a23-453e-ba6e-a9146d3467e7","resourceVersion":"996","creationTimestamp":"2024-09-17T09:23:37Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:23:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6422 chars]
	I0917 02:26:55.908226    5221 request.go:632] Waited for 197.233912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m03
	I0917 02:26:55.908279    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m03
	I0917 02:26:55.908288    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:55.908299    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:55.908307    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:55.910888    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:55.910905    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:55.910912    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:56 GMT
	I0917 02:26:55.910916    5221 round_trippers.go:580]     Audit-Id: 0aac35a3-c1d3-4d6c-aa7b-84fdbbfe27ce
	I0917 02:26:55.910920    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:55.910923    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:55.910926    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:55.910930    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:55.911029    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m03","uid":"ca6d8a0b-78e8-401d-8fd0-21af7b79983d","resourceVersion":"1023","creationTimestamp":"2024-09-17T09:24:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_24_31_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:24:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0917 02:26:55.911296    5221 pod_ready.go:98] node "multinode-232000-m03" hosting pod "kube-proxy-xlb2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000-m03" has status "Ready":"Unknown"
	I0917 02:26:55.911313    5221 pod_ready.go:82] duration metric: took 400.94378ms for pod "kube-proxy-xlb2z" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:55.911322    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000-m03" hosting pod "kube-proxy-xlb2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000-m03" has status "Ready":"Unknown"
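
The wait loop above is a two-step readiness check: fetch the pod, then fetch the node named in the pod's spec, and skip the pod when that node's Ready condition is not True (here multinode-232000-m03 reports "Ready":"Unknown", so kube-proxy-xlb2z is skipped rather than failed). A minimal client-go sketch of the same check; the kubeconfig-based clientset wiring below is illustrative, not minikube's exact code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeHostingPodIsReady reports whether the node hosting the given pod
// currently has a NodeReady condition with status True.
func nodeHostingPodIsReady(cs kubernetes.Interface, ns, pod string) (bool, error) {
	p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	n, err := cs.CoreV1().Nodes().Get(context.TODO(), p.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ready, err := nodeHostingPodIsReady(kubernetes.NewForConfigOrDie(cfg), "kube-system", "kube-proxy-xlb2z")
	fmt.Println(ready, err)
}
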
	I0917 02:26:55.911346    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:56.106469    5221 request.go:632] Waited for 195.008281ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-232000
	I0917 02:26:56.106529    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-232000
	I0917 02:26:56.106538    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:56.106549    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:56.106558    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:56.109150    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:56.109162    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:56.109167    5221 round_trippers.go:580]     Audit-Id: 25bb6df9-481a-4fbe-b913-9420ec1197db
	I0917 02:26:56.109171    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:56.109173    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:56.109176    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:56.109178    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:56.109180    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:56 GMT
	I0917 02:26:56.109357    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-232000","namespace":"kube-system","uid":"a38a42a2-e0f9-4c6e-aa99-8dae3f326090","resourceVersion":"910","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6ab4348c7fba41d6fa49c901ef5e8acf","kubernetes.io/config.mirror":"6ab4348c7fba41d6fa49c901ef5e8acf","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0917 02:26:56.308345    5221 request.go:632] Waited for 198.710667ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:56.308416    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:56.308458    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:56.308476    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:56.308484    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:56.310835    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:56.310855    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:56.310863    5221 round_trippers.go:580]     Audit-Id: 320b3fc2-7426-4a90-b0c3-b33f2fdfef24
	I0917 02:26:56.310878    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:56.310882    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:56.310886    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:56.310889    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:56.310894    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:56 GMT
	I0917 02:26:56.311210    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5172 chars]
	I0917 02:26:56.311400    5221 pod_ready.go:93] pod "kube-scheduler-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:56.311409    5221 pod_ready.go:82] duration metric: took 400.0373ms for pod "kube-scheduler-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:56.311416    5221 pod_ready.go:39] duration metric: took 1.602085565s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
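
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines in this phase come from client-go's default token-bucket rate limiter (QPS 5, burst 10), which the roughly 200ms gaps between GETs reflect; server-side API Priority and Fairness is not involved. A sketch of where those limits live on a rest.Config, with illustrative values:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go defaults are QPS=5, Burst=10; raising them reduces the
	// client-side throttling waits seen above (values are illustrative).
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
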
	I0917 02:26:56.311428    5221 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:26:56.311490    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:26:56.322009    5221 system_svc.go:56] duration metric: took 10.575049ms WaitForService to wait for kubelet
	I0917 02:26:56.322028    5221 kubeadm.go:582] duration metric: took 15.78853698s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:26:56.322045    5221 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:26:56.506353    5221 request.go:632] Waited for 184.26297ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes
	I0917 02:26:56.506410    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0917 02:26:56.506415    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:56.506421    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:56.506426    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:56.509766    5221 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:26:56.509781    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:56.509787    5221 round_trippers.go:580]     Audit-Id: 12331aab-1be8-48a8-b6b2-a02524208e8a
	I0917 02:26:56.509792    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:56.509796    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:56.509801    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:56.509814    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:56.509818    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:56 GMT
	I0917 02:26:56.510062    5221 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1027"},"items":[{"metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 15541 chars]
	I0917 02:26:56.510477    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:56.510486    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:56.510493    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:56.510496    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:56.510499    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:56.510501    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:56.510504    5221 node_conditions.go:105] duration metric: took 188.455004ms to run NodePressure ...
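
The NodePressure step reads each node's advertised capacity; the three cpu/ephemeral-storage pairs above correspond to the three nodes returned by the single GET /api/v1/nodes call. A small sketch that lists nodes and prints the same two capacity fields, using the same illustrative clientset wiring as the earlier sketch:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a corev1.ResourceList; Cpu() and StorageEphemeral()
		// are its convenience accessors for the fields logged above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
	}
}
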
	I0917 02:26:56.510513    5221 start.go:241] waiting for startup goroutines ...
	I0917 02:26:56.510531    5221 start.go:255] writing updated cluster config ...
	I0917 02:26:56.531293    5221 out.go:201] 
	I0917 02:26:56.552169    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:56.552271    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:26:56.574096    5221 out.go:177] * Starting "multinode-232000-m03" worker node in "multinode-232000" cluster
	I0917 02:26:56.616005    5221 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:26:56.616032    5221 cache.go:56] Caching tarball of preloaded images
	I0917 02:26:56.616181    5221 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:26:56.616194    5221 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:26:56.616287    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:26:56.617058    5221 start.go:360] acquireMachinesLock for multinode-232000-m03: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:26:56.617135    5221 start.go:364] duration metric: took 58.158µs to acquireMachinesLock for "multinode-232000-m03"
	I0917 02:26:56.617164    5221 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:26:56.617170    5221 fix.go:54] fixHost starting: m03
	I0917 02:26:56.617493    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:56.617520    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:56.626261    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53490
	I0917 02:26:56.626628    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:56.626996    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:56.627019    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:56.627250    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:56.627367    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:26:56.627481    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetState
	I0917 02:26:56.627571    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:56.627660    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | hyperkit pid from json: 5155
	I0917 02:26:56.628608    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | hyperkit pid 5155 missing from process table
	I0917 02:26:56.628607    5221 fix.go:112] recreateIfNeeded on multinode-232000-m03: state=Stopped err=<nil>
	I0917 02:26:56.628621    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	W0917 02:26:56.628704    5221 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:26:56.650179    5221 out.go:177] * Restarting existing hyperkit VM for "multinode-232000-m03" ...
	I0917 02:26:56.692010    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .Start
	I0917 02:26:56.692225    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:56.692273    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/hyperkit.pid
	I0917 02:26:56.692316    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Using UUID d1ac9720-c400-4519-b59b-fee993a19e36
	I0917 02:26:56.718521    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Generated MAC d2:11:43:9a:a8:47
	I0917 02:26:56.718543    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000
	I0917 02:26:56.718686    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d1ac9720-c400-4519-b59b-fee993a19e36", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b770)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:26:56.718712    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d1ac9720-c400-4519-b59b-fee993a19e36", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b770)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:26:56.718749    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "d1ac9720-c400-4519-b59b-fee993a19e36", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/multinode-232000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000"}
	I0917 02:26:56.718788    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U d1ac9720-c400-4519-b59b-fee993a19e36 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/multinode-232000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000"
	I0917 02:26:56.718815    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:26:56.720344    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 DEBUG: hyperkit: Pid is 5295
	I0917 02:26:56.720831    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Attempt 0
	I0917 02:26:56.720839    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:56.720922    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | hyperkit pid from json: 5295
	I0917 02:26:56.722031    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Searching for d2:11:43:9a:a8:47 in /var/db/dhcpd_leases ...
	I0917 02:26:56.722096    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0917 02:26:56.722112    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9cc2}
	I0917 02:26:56.722144    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9c80}
	I0917 02:26:56.722175    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66e94ae5}
	I0917 02:26:56.722199    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetConfigRaw
	I0917 02:26:56.722195    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Found match: d2:11:43:9a:a8:47
	I0917 02:26:56.722218    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | IP: 192.169.0.16
	I0917 02:26:56.722850    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetIP
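
The driver recovers the restarted VM's IP by matching the MAC it generated (d2:11:43:9a:a8:47) against macOS's DHCP lease database at /var/db/dhcpd_leases, as the "Searching for ... Found match" lines above show. A rough sketch of such a lookup; the ip_address=/hw_address= field layout below matches the common dhcpd_leases format but is an assumption, not the driver's exact parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans a dhcpd_leases-style file and returns the ip_address of the
// lease whose hw_address ends with the given MAC. It relies on ip_address
// appearing before hw_address within each lease block (an assumption).
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	s := bufio.NewScanner(f)
	for s.Scan() {
		line := strings.TrimSpace(s.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
			return ip, nil
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "d2:11:43:9a:a8:47")
	fmt.Println(ip, err)
}
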
	I0917 02:26:56.723062    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:26:56.723645    5221 machine.go:93] provisionDockerMachine start ...
	I0917 02:26:56.723658    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:26:56.723787    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:26:56.723888    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:26:56.724034    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:26:56.724135    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:26:56.724235    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:26:56.724355    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:56.724508    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:26:56.724514    5221 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:26:56.728260    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:26:56.737031    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:26:56.737901    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:26:56.737915    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:26:56.737922    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:26:56.737942    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:26:57.121903    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:26:57.121918    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:26:57.236666    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:26:57.236681    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:26:57.236689    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:26:57.236698    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:26:57.237536    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:26:57.237546    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:27:02.855427    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:27:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:27:02.855495    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:27:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:27:02.855506    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:27:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:27:02.878415    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:27:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:27:07.791740    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:27:07.791764    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetMachineName
	I0917 02:27:07.791894    5221 buildroot.go:166] provisioning hostname "multinode-232000-m03"
	I0917 02:27:07.791903    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetMachineName
	I0917 02:27:07.791995    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:07.792069    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:07.792153    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:07.792230    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:07.792312    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:07.792431    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:27:07.792585    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:27:07.792593    5221 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-232000-m03 && echo "multinode-232000-m03" | sudo tee /etc/hostname
	I0917 02:27:07.864742    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-232000-m03
	
	I0917 02:27:07.864759    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:07.864886    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:07.864977    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:07.865068    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:07.865165    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:07.865308    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:27:07.865454    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:27:07.865465    5221 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-232000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-232000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-232000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
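
The shell above makes the hostname change persistent in /etc/hosts: if no entry already ends with the hostname, it rewrites an existing 127.0.1.1 line in place, otherwise appends one. A sketch of composing and dispatching that command from the Go side; runSSH is a hypothetical helper standing in for the provisioner's SSH runner, not minikube's actual API:

package main

import "fmt"

// runSSH is a hypothetical stand-in; a real implementation would execute
// cmd on the guest over the established SSH session.
func runSSH(cmd string) (string, error) { return "", nil }

// pinHostname renders the same idempotent /etc/hosts edit seen in the log
// for an arbitrary hostname.
func pinHostname(host string) error {
	cmd := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, host)
	_, err := runSSH(cmd)
	return err
}

func main() {
	if err := pinHostname("multinode-232000-m03"); err != nil {
		panic(err)
	}
}
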
	I0917 02:27:07.933359    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:27:07.933373    5221 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:27:07.933385    5221 buildroot.go:174] setting up certificates
	I0917 02:27:07.933425    5221 provision.go:84] configureAuth start
	I0917 02:27:07.933432    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetMachineName
	I0917 02:27:07.933562    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetIP
	I0917 02:27:07.933682    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:07.933774    5221 provision.go:143] copyHostCerts
	I0917 02:27:07.933802    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:27:07.933860    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:27:07.933866    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:27:07.933980    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:27:07.934171    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:27:07.934210    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:27:07.934215    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:27:07.934290    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:27:07.934431    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:27:07.934474    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:27:07.934485    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:27:07.934560    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:27:07.934706    5221 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.multinode-232000-m03 san=[127.0.0.1 192.169.0.16 localhost minikube multinode-232000-m03]
	I0917 02:27:08.109556    5221 provision.go:177] copyRemoteCerts
	I0917 02:27:08.109624    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:27:08.109639    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:08.109791    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:08.109895    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.110014    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:08.110101    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/id_rsa Username:docker}
	I0917 02:27:08.148644    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:27:08.148715    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0917 02:27:08.170743    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:27:08.170816    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:27:08.190414    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:27:08.190477    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:27:08.210572    5221 provision.go:87] duration metric: took 277.137929ms to configureAuth
	I0917 02:27:08.210586    5221 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:27:08.210763    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:27:08.210777    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:27:08.210906    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:08.210999    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:08.211084    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.211160    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.211235    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:08.211354    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:27:08.211487    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:27:08.211495    5221 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:27:08.274645    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:27:08.274660    5221 buildroot.go:70] root file system type: tmpfs
	I0917 02:27:08.274730    5221 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:27:08.274739    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:08.274865    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:08.274954    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.275039    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.275122    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:08.275247    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:27:08.275381    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:27:08.275425    5221 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.14"
	Environment="NO_PROXY=192.169.0.14,192.169.0.15"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:27:08.347017    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.14
	Environment=NO_PROXY=192.169.0.14,192.169.0.15
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:27:08.347037    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:08.347171    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:08.347271    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.347376    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.347483    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:08.347637    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:27:08.347777    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:27:08.347789    5221 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:27:09.913613    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:27:09.913630    5221 machine.go:96] duration metric: took 13.189915264s to provisionDockerMachine
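
The `diff -u ... || { mv ...; systemctl ...; }` step above is an idempotent update: the freshly rendered unit is swapped in and docker re-enabled and restarted only when it differs from what is already on disk (here the diff fails because no unit existed yet, so the new file is installed and the symlink created). The same compare-then-swap pattern in plain Go, with an illustrative restart hook:

package main

import (
	"bytes"
	"os"
)

// replaceIfChanged installs rendered at path and runs onChange, but only
// when the current content differs or the file does not exist yet.
func replaceIfChanged(path string, rendered []byte, onChange func() error) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // already up to date: skip the restart
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(tmp, path); err != nil {
		return err
	}
	return onChange()
}

func main() {
	unit := []byte("[Unit]\nDescription=example\n")
	_ = replaceIfChanged("/tmp/docker.service", unit, func() error {
		// illustrative hook; the real flow runs daemon-reload, enable, restart
		return nil
	})
}
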
	I0917 02:27:09.913639    5221 start.go:293] postStartSetup for "multinode-232000-m03" (driver="hyperkit")
	I0917 02:27:09.913647    5221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:27:09.913658    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:27:09.913851    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:27:09.913865    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:09.913960    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:09.914053    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:09.914143    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:09.914233    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/id_rsa Username:docker}
	I0917 02:27:09.951318    5221 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:27:09.954295    5221 command_runner.go:130] > NAME=Buildroot
	I0917 02:27:09.954304    5221 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0917 02:27:09.954308    5221 command_runner.go:130] > ID=buildroot
	I0917 02:27:09.954312    5221 command_runner.go:130] > VERSION_ID=2023.02.9
	I0917 02:27:09.954315    5221 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0917 02:27:09.954478    5221 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:27:09.954487    5221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:27:09.954582    5221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:27:09.954755    5221 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:27:09.954761    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:27:09.954962    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:27:09.962189    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:27:09.981823    5221 start.go:296] duration metric: took 68.175521ms for postStartSetup
	I0917 02:27:09.981853    5221 fix.go:56] duration metric: took 13.364613789s for fixHost
	I0917 02:27:09.981867    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:09.981997    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:09.982080    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:09.982170    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:09.982246    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:09.982368    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:27:09.982503    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:27:09.982510    5221 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:27:10.044353    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726565230.013398915
	
	I0917 02:27:10.044364    5221 fix.go:216] guest clock: 1726565230.013398915
	I0917 02:27:10.044369    5221 fix.go:229] Guest: 2024-09-17 02:27:10.013398915 -0700 PDT Remote: 2024-09-17 02:27:09.981858 -0700 PDT m=+119.050037971 (delta=31.540915ms)
	I0917 02:27:10.044388    5221 fix.go:200] guest clock delta is within tolerance: 31.540915ms
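The check above reads the guest clock over SSH (`date +%s.%N`) and compares it against the host clock at the moment the command returns. A minimal shell sketch of the same comparison, using the IP and key path from this run (the `bc` arithmetic is illustrative, not minikube's implementation):

	HOST_TS=$(date +%s.%N)
	GUEST_TS=$(ssh -i /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/id_rsa docker@192.169.0.16 'date +%s.%N')
	# drift in seconds; the run above measured 31.540915ms, within tolerance
	echo "delta: $(echo "$GUEST_TS - $HOST_TS" | bc)s"
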
	I0917 02:27:10.044393    5221 start.go:83] releasing machines lock for "multinode-232000-m03", held for 13.427188639s
	I0917 02:27:10.044408    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:27:10.044524    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetIP
	I0917 02:27:10.068467    5221 out.go:177] * Found network options:
	I0917 02:27:10.089147    5221 out.go:177]   - NO_PROXY=192.169.0.14,192.169.0.15
	W0917 02:27:10.110282    5221 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:27:10.110310    5221 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:27:10.110330    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:27:10.110971    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:27:10.111122    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:27:10.111213    5221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:27:10.111241    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	W0917 02:27:10.111277    5221 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:27:10.111297    5221 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:27:10.111371    5221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:27:10.111385    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:10.111392    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:10.111568    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:10.111592    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:10.111731    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:10.111772    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:10.111891    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/id_rsa Username:docker}
	I0917 02:27:10.111918    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:10.112051    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/id_rsa Username:docker}
	I0917 02:27:10.148705    5221 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0917 02:27:10.148728    5221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:27:10.148796    5221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:27:10.208951    5221 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0917 02:27:10.209004    5221 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0917 02:27:10.209023    5221 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:27:10.209033    5221 start.go:495] detecting cgroup driver to use...
	I0917 02:27:10.209116    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:27:10.224400    5221 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
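The `tee` above points crictl at containerd's socket via /etc/crictl.yaml. With that file in place, standard crictl invocations work against the runtime directly (shown for reference; these are stock crictl commands, not part of this run):

	sudo crictl info    # endpoint is read from /etc/crictl.yaml
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
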
	I0917 02:27:10.224628    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:27:10.233324    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:27:10.242088    5221 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:27:10.242138    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:27:10.250958    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:27:10.259862    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:27:10.268732    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:27:10.277394    5221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:27:10.286530    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:27:10.295429    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:27:10.304249    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
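The sed pipeline above rewrites /etc/containerd/config.toml in place. A sketch of the CRI-plugin fragment those edits converge on, following containerd v1.7's documented config layout (the exact file on the VM is not echoed in this log and may differ):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
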
	I0917 02:27:10.313055    5221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:27:10.321020    5221 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0917 02:27:10.321157    5221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:27:10.329565    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:27:10.437126    5221 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:27:10.454772    5221 start.go:495] detecting cgroup driver to use...
	I0917 02:27:10.454854    5221 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:27:10.474710    5221 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0917 02:27:10.475164    5221 command_runner.go:130] > [Unit]
	I0917 02:27:10.475174    5221 command_runner.go:130] > Description=Docker Application Container Engine
	I0917 02:27:10.475179    5221 command_runner.go:130] > Documentation=https://docs.docker.com
	I0917 02:27:10.475198    5221 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0917 02:27:10.475206    5221 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0917 02:27:10.475211    5221 command_runner.go:130] > StartLimitBurst=3
	I0917 02:27:10.475215    5221 command_runner.go:130] > StartLimitIntervalSec=60
	I0917 02:27:10.475218    5221 command_runner.go:130] > [Service]
	I0917 02:27:10.475221    5221 command_runner.go:130] > Type=notify
	I0917 02:27:10.475224    5221 command_runner.go:130] > Restart=on-failure
	I0917 02:27:10.475229    5221 command_runner.go:130] > Environment=NO_PROXY=192.169.0.14
	I0917 02:27:10.475233    5221 command_runner.go:130] > Environment=NO_PROXY=192.169.0.14,192.169.0.15
	I0917 02:27:10.475240    5221 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0917 02:27:10.475250    5221 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0917 02:27:10.475256    5221 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0917 02:27:10.475261    5221 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0917 02:27:10.475267    5221 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0917 02:27:10.475272    5221 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0917 02:27:10.475283    5221 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0917 02:27:10.475289    5221 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0917 02:27:10.475294    5221 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0917 02:27:10.475299    5221 command_runner.go:130] > ExecStart=
	I0917 02:27:10.475312    5221 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0917 02:27:10.475316    5221 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0917 02:27:10.475322    5221 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0917 02:27:10.475331    5221 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0917 02:27:10.475334    5221 command_runner.go:130] > LimitNOFILE=infinity
	I0917 02:27:10.475338    5221 command_runner.go:130] > LimitNPROC=infinity
	I0917 02:27:10.475341    5221 command_runner.go:130] > LimitCORE=infinity
	I0917 02:27:10.475346    5221 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0917 02:27:10.475351    5221 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0917 02:27:10.475354    5221 command_runner.go:130] > TasksMax=infinity
	I0917 02:27:10.475357    5221 command_runner.go:130] > TimeoutStartSec=0
	I0917 02:27:10.475362    5221 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0917 02:27:10.475366    5221 command_runner.go:130] > Delegate=yes
	I0917 02:27:10.475375    5221 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0917 02:27:10.475379    5221 command_runner.go:130] > KillMode=process
	I0917 02:27:10.475382    5221 command_runner.go:130] > [Install]
	I0917 02:27:10.475387    5221 command_runner.go:130] > WantedBy=multi-user.target
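The empty `ExecStart=` followed by a full `ExecStart=` line in the unit above is systemd's documented pattern for replacing, rather than appending to, an inherited command (the file's own comments explain why). A minimal generic sketch of the same drop-in technique:

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
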
	I0917 02:27:10.475467    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:27:10.487090    5221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:27:10.505533    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:27:10.516787    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:27:10.527672    5221 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:27:10.547133    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:27:10.557296    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:27:10.571773    5221 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0917 02:27:10.572066    5221 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:27:10.574829    5221 command_runner.go:130] > /usr/bin/cri-dockerd
	I0917 02:27:10.575045    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:27:10.582206    5221 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
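The 190-byte drop-in is written from memory, so its contents are not echoed in this log. A hypothetical example of a cri-dockerd CNI drop-in of this shape, using cri-dockerd's documented flags (the actual payload minikube writes may differ):

	[Service]
	ExecStart=
	ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// \
	    --network-plugin=cni --cni-conf-dir=/etc/cni/net.d
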
	I0917 02:27:10.595639    5221 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:27:10.704292    5221 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:27:10.818639    5221 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:27:10.818670    5221 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
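The 130-byte daemon.json is likewise written from memory and not echoed. Given the "cgroupfs" message above, a representative payload would look like this (illustrative only; these are standard dockerd daemon.json keys, and the exact content on the VM is not shown in this log):

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
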
	I0917 02:27:10.832988    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:27:10.931490    5221 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:28:11.810763    5221 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0917 02:28:11.810778    5221 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0917 02:28:11.810852    5221 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.879066143s)
	I0917 02:28:11.810930    5221 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 02:28:11.820846    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0917 02:28:11.820860    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.587012293Z" level=info msg="Starting up"
	I0917 02:28:11.820868    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.587727927Z" level=info msg="containerd not running, starting managed containerd"
	I0917 02:28:11.820881    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.588278751Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
	I0917 02:28:11.820889    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.604257552Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I0917 02:28:11.820899    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620120903Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0917 02:28:11.820909    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620146681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0917 02:28:11.820918    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620184469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0917 02:28:11.820927    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620194716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.820937    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620335138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0917 02:28:11.820946    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620374123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.820965    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620521898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0917 02:28:11.820976    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620558023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.820987    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620570804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0917 02:28:11.820996    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620578774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.821007    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620679363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.821016    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620870887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.821030    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622470881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0917 02:28:11.821041    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622510433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.821141    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622614354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0917 02:28:11.821157    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622647767Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0917 02:28:11.821168    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622750438Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0917 02:28:11.821176    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622793925Z" level=info msg="metadata content store policy set" policy=shared
	I0917 02:28:11.821184    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624278427Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0917 02:28:11.821194    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624325218Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0917 02:28:11.821202    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624338472Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0917 02:28:11.821211    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624348654Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0917 02:28:11.821219    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624360500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0917 02:28:11.821228    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624450205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0917 02:28:11.821237    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624612298Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0917 02:28:11.821245    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624684799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0917 02:28:11.821254    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624696377Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0917 02:28:11.821263    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624704926Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0917 02:28:11.821273    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624720392Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821284    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624732730Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821294    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624741016Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821302    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624762305Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821311    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624773829Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821320    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624782485Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821471    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624791242Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821487    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624799058Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821509    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624812700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821522    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624821844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821531    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624838386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821540    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624849680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821553    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624860870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821562    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624869678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821571    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624877407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821579    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624885574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821589    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624894140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821597    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624903681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821606    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624911167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821614    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624918808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821627    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624926384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821636    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624935585Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0917 02:28:11.821644    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624951098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821653    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624959500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821662    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624967057Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0917 02:28:11.821671    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624995177Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0917 02:28:11.821683    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625006123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0917 02:28:11.821693    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625013538Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0917 02:28:11.821772    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625021457Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0917 02:28:11.821785    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625027736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821797    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625037164Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0917 02:28:11.821805    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625044080Z" level=info msg="NRI interface is disabled by configuration."
	I0917 02:28:11.821815    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625194820Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0917 02:28:11.821823    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625267645Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0917 02:28:11.821831    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625321861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0917 02:28:11.821840    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625334867Z" level=info msg="containerd successfully booted in 0.021716s"
	I0917 02:28:11.821848    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.607440214Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0917 02:28:11.821856    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.629515088Z" level=info msg="Loading containers: start."
	I0917 02:28:11.821875    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.728163971Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0917 02:28:11.821885    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.797005402Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0917 02:28:11.821893    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.846572511Z" level=info msg="Loading containers: done."
	I0917 02:28:11.821903    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.854213853Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	I0917 02:28:11.821911    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.854405276Z" level=info msg="Daemon has completed initialization"
	I0917 02:28:11.821919    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.877998533Z" level=info msg="API listen on /var/run/docker.sock"
	I0917 02:28:11.821927    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.878088127Z" level=info msg="API listen on [::]:2376"
	I0917 02:28:11.821934    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 systemd[1]: Started Docker Application Container Engine.
	I0917 02:28:11.821943    5221 command_runner.go:130] > Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.933377209Z" level=info msg="Processing signal 'terminated'"
	I0917 02:28:11.821954    5221 command_runner.go:130] > Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934105331Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0917 02:28:11.821965    5221 command_runner.go:130] > Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934523529Z" level=info msg="Daemon shutdown complete"
	I0917 02:28:11.821978    5221 command_runner.go:130] > Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934593980Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0917 02:28:11.821989    5221 command_runner.go:130] > Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934602401Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0917 02:28:11.822018    5221 command_runner.go:130] > Sep 17 09:27:10 multinode-232000-m03 systemd[1]: Stopping Docker Application Container Engine...
	I0917 02:28:11.822024    5221 command_runner.go:130] > Sep 17 09:27:11 multinode-232000-m03 systemd[1]: docker.service: Deactivated successfully.
	I0917 02:28:11.822033    5221 command_runner.go:130] > Sep 17 09:27:11 multinode-232000-m03 systemd[1]: Stopped Docker Application Container Engine.
	I0917 02:28:11.822039    5221 command_runner.go:130] > Sep 17 09:27:11 multinode-232000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0917 02:28:11.822045    5221 command_runner.go:130] > Sep 17 09:27:11 multinode-232000-m03 dockerd[873]: time="2024-09-17T09:27:11.969616869Z" level=info msg="Starting up"
	I0917 02:28:11.822054    5221 command_runner.go:130] > Sep 17 09:28:11 multinode-232000-m03 dockerd[873]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0917 02:28:11.822063    5221 command_runner.go:130] > Sep 17 09:28:11 multinode-232000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0917 02:28:11.822070    5221 command_runner.go:130] > Sep 17 09:28:11 multinode-232000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0917 02:28:11.822076    5221 command_runner.go:130] > Sep 17 09:28:11 multinode-232000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	I0917 02:28:11.862553    5221 out.go:201] 
	W0917 02:28:11.899560    5221 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 09:27:08 multinode-232000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.587012293Z" level=info msg="Starting up"
	Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.587727927Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.588278751Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.604257552Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620120903Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620146681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620184469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620194716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620335138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620374123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620521898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620558023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620570804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620578774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620679363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620870887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622470881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622510433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622614354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622647767Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622750438Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622793925Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624278427Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624325218Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624338472Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624348654Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624360500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624450205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624612298Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624684799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624696377Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624704926Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624720392Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624732730Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624741016Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624762305Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624773829Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624782485Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624791242Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624799058Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624812700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624821844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624838386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624849680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624860870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624869678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624877407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624885574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624894140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624903681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624911167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624918808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624926384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624935585Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624951098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624959500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624967057Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624995177Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625006123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625013538Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625021457Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625027736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625037164Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625044080Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625194820Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625267645Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625321861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625334867Z" level=info msg="containerd successfully booted in 0.021716s"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.607440214Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.629515088Z" level=info msg="Loading containers: start."
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.728163971Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.797005402Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.846572511Z" level=info msg="Loading containers: done."
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.854213853Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.854405276Z" level=info msg="Daemon has completed initialization"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.877998533Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.878088127Z" level=info msg="API listen on [::]:2376"
	Sep 17 09:27:09 multinode-232000-m03 systemd[1]: Started Docker Application Container Engine.
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.933377209Z" level=info msg="Processing signal 'terminated'"
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934105331Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934523529Z" level=info msg="Daemon shutdown complete"
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934593980Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934602401Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 09:27:10 multinode-232000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 09:27:11 multinode-232000-m03 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 09:27:11 multinode-232000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 09:27:11 multinode-232000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:27:11 multinode-232000-m03 dockerd[873]: time="2024-09-17T09:27:11.969616869Z" level=info msg="Starting up"
	Sep 17 09:28:11 multinode-232000-m03 dockerd[873]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 09:28:11 multinode-232000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 09:28:11 multinode-232000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 09:28:11 multinode-232000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
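The restart fails because the relaunched dockerd (pid 873) cannot dial /run/containerd/containerd.sock before its 60s deadline. Note that this same run stopped the system containerd (`sudo systemctl stop -f containerd` at 02:27:10); if dockerd is configured to use that socket rather than its own managed containerd, the stop would leave nothing listening there. A hedged triage sequence on the guest, using standard systemd/containerd tooling (these commands are not from this log):

	sudo systemctl status containerd docker --no-pager
	ls -l /run/containerd/containerd.sock          # does the socket exist?
	sudo journalctl --no-pager -u containerd | tail -n 50
	sudo ctr --address /run/containerd/containerd.sock version   # can it be dialed?
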
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 09:27:08 multinode-232000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.587012293Z" level=info msg="Starting up"
	Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.587727927Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.588278751Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.604257552Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620120903Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620146681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620184469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620194716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620335138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620374123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620521898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620558023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620570804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620578774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620679363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620870887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622470881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622510433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622614354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622647767Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622750438Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622793925Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624278427Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624325218Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624338472Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624348654Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624360500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624450205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624612298Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624684799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624696377Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624704926Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624720392Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624732730Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624741016Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624762305Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624773829Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624782485Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624791242Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624799058Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624812700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624821844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624838386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624849680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624860870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624869678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624877407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624885574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624894140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624903681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624911167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624918808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624926384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624935585Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624951098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624959500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624967057Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624995177Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625006123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625013538Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625021457Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625027736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625037164Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625044080Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625194820Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625267645Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625321861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625334867Z" level=info msg="containerd successfully booted in 0.021716s"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.607440214Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.629515088Z" level=info msg="Loading containers: start."
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.728163971Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.797005402Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.846572511Z" level=info msg="Loading containers: done."
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.854213853Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.854405276Z" level=info msg="Daemon has completed initialization"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.877998533Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.878088127Z" level=info msg="API listen on [::]:2376"
	Sep 17 09:27:09 multinode-232000-m03 systemd[1]: Started Docker Application Container Engine.
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.933377209Z" level=info msg="Processing signal 'terminated'"
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934105331Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934523529Z" level=info msg="Daemon shutdown complete"
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934593980Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934602401Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 09:27:10 multinode-232000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 09:27:11 multinode-232000-m03 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 09:27:11 multinode-232000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 09:27:11 multinode-232000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:27:11 multinode-232000-m03 dockerd[873]: time="2024-09-17T09:27:11.969616869Z" level=info msg="Starting up"
	Sep 17 09:28:11 multinode-232000-m03 dockerd[873]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 09:28:11 multinode-232000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 09:28:11 multinode-232000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 09:28:11 multinode-232000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0917 02:28:11.899670    5221 out.go:270] * 
	W0917 02:28:11.900924    5221 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:28:11.963413    5221 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-232000" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-232000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-232000 -n multinode-232000
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-232000 logs -n 25: (2.844034893s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-232000 ssh -n                                                                                                     | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-232000 cp multinode-232000-m02:/home/docker/cp-test.txt                                                           | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1710989141/001/cp-test_multinode-232000-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-232000 ssh -n                                                                                                     | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-232000 cp multinode-232000-m02:/home/docker/cp-test.txt                                                           | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000:/home/docker/cp-test_multinode-232000-m02_multinode-232000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-232000 ssh -n                                                                                                     | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-232000 ssh -n multinode-232000 sudo cat                                                                           | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | /home/docker/cp-test_multinode-232000-m02_multinode-232000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-232000 cp multinode-232000-m02:/home/docker/cp-test.txt                                                           | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000-m03:/home/docker/cp-test_multinode-232000-m02_multinode-232000-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-232000 ssh -n                                                                                                     | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-232000 ssh -n multinode-232000-m03 sudo cat                                                                       | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | /home/docker/cp-test_multinode-232000-m02_multinode-232000-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-232000 cp testdata/cp-test.txt                                                                                    | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-232000 ssh -n                                                                                                     | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-232000 cp multinode-232000-m03:/home/docker/cp-test.txt                                                           | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1710989141/001/cp-test_multinode-232000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-232000 ssh -n                                                                                                     | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-232000 cp multinode-232000-m03:/home/docker/cp-test.txt                                                           | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000:/home/docker/cp-test_multinode-232000-m03_multinode-232000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-232000 ssh -n                                                                                                     | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-232000 ssh -n multinode-232000 sudo cat                                                                           | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | /home/docker/cp-test_multinode-232000-m03_multinode-232000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-232000 cp multinode-232000-m03:/home/docker/cp-test.txt                                                           | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000-m02:/home/docker/cp-test_multinode-232000-m03_multinode-232000-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-232000 ssh -n                                                                                                     | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | multinode-232000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-232000 ssh -n multinode-232000-m02 sudo cat                                                                       | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | /home/docker/cp-test_multinode-232000-m03_multinode-232000-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-232000 node stop m03                                                                                              | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	| node    | multinode-232000 node start                                                                                                 | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:24 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                                  |                  |         |         |                     |                     |
	| node    | list -p multinode-232000                                                                                                    | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT |                     |
	| stop    | -p multinode-232000                                                                                                         | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:24 PDT | 17 Sep 24 02:25 PDT |
	| start   | -p multinode-232000                                                                                                         | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:25 PDT |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-232000                                                                                                    | multinode-232000 | jenkins | v1.34.0 | 17 Sep 24 02:28 PDT |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 02:25:10
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 02:25:10.966836    5221 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:25:10.967023    5221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:25:10.967029    5221 out.go:358] Setting ErrFile to fd 2...
	I0917 02:25:10.967032    5221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:25:10.967205    5221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:25:10.968581    5221 out.go:352] Setting JSON to false
	I0917 02:25:10.991092    5221 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3280,"bootTime":1726561830,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 02:25:10.991240    5221 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:25:11.013247    5221 out.go:177] * [multinode-232000] minikube v1.34.0 on Darwin 14.6.1
	I0917 02:25:11.062209    5221 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:25:11.062261    5221 notify.go:220] Checking for updates...
	I0917 02:25:11.103634    5221 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:25:11.124715    5221 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 02:25:11.145307    5221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:25:11.166695    5221 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 02:25:11.187672    5221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:25:11.209239    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:25:11.209413    5221 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:25:11.210175    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:25:11.210246    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:25:11.219866    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53422
	I0917 02:25:11.220228    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:25:11.220615    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:25:11.220623    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:25:11.220831    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:25:11.220936    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:11.249663    5221 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 02:25:11.291450    5221 start.go:297] selected driver: hyperkit
	I0917 02:25:11.291506    5221 start.go:901] validating driver "hyperkit" against &{Name:multinode-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.16 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:25:11.291714    5221 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:25:11.291864    5221 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:25:11.292007    5221 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 02:25:11.301121    5221 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 02:25:11.304883    5221 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:25:11.304904    5221 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 02:25:11.307502    5221 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:25:11.307543    5221 cni.go:84] Creating CNI manager for ""
	I0917 02:25:11.307582    5221 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 02:25:11.307654    5221 start.go:340] cluster config:
	{Name:multinode-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.16 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:25:11.307764    5221 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:25:11.349629    5221 out.go:177] * Starting "multinode-232000" primary control-plane node in "multinode-232000" cluster
	I0917 02:25:11.370307    5221 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:25:11.370365    5221 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 02:25:11.370382    5221 cache.go:56] Caching tarball of preloaded images
	I0917 02:25:11.370551    5221 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:25:11.370565    5221 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:25:11.370702    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:25:11.371399    5221 start.go:360] acquireMachinesLock for multinode-232000: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:25:11.371516    5221 start.go:364] duration metric: took 86.283µs to acquireMachinesLock for "multinode-232000"
	I0917 02:25:11.371547    5221 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:25:11.371561    5221 fix.go:54] fixHost starting: 
	I0917 02:25:11.371905    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:25:11.371930    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:25:11.380462    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53424
	I0917 02:25:11.380782    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:25:11.381229    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:25:11.381250    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:25:11.381460    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:25:11.381636    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:11.381749    5221 main.go:141] libmachine: (multinode-232000) Calling .GetState
	I0917 02:25:11.381840    5221 main.go:141] libmachine: (multinode-232000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:25:11.381925    5221 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid from json: 4780
	I0917 02:25:11.382861    5221 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid 4780 missing from process table
	I0917 02:25:11.382886    5221 fix.go:112] recreateIfNeeded on multinode-232000: state=Stopped err=<nil>
	I0917 02:25:11.382907    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	W0917 02:25:11.382987    5221 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:25:11.424651    5221 out.go:177] * Restarting existing hyperkit VM for "multinode-232000" ...
	I0917 02:25:11.445479    5221 main.go:141] libmachine: (multinode-232000) Calling .Start
	I0917 02:25:11.445739    5221 main.go:141] libmachine: (multinode-232000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:25:11.445785    5221 main.go:141] libmachine: (multinode-232000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/hyperkit.pid
	I0917 02:25:11.447100    5221 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid 4780 missing from process table
	I0917 02:25:11.447123    5221 main.go:141] libmachine: (multinode-232000) DBG | pid 4780 is in state "Stopped"
	I0917 02:25:11.447156    5221 main.go:141] libmachine: (multinode-232000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/hyperkit.pid...
	I0917 02:25:11.447293    5221 main.go:141] libmachine: (multinode-232000) DBG | Using UUID 8074f2a2-7362-42ba-b144-29938f44cef0
	I0917 02:25:11.553992    5221 main.go:141] libmachine: (multinode-232000) DBG | Generated MAC 5a:1f:11:e5:b7:54
	I0917 02:25:11.554016    5221 main.go:141] libmachine: (multinode-232000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000
	I0917 02:25:11.554135    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8074f2a2-7362-42ba-b144-29938f44cef0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:25:11.554162    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8074f2a2-7362-42ba-b144-29938f44cef0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:25:11.554248    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8074f2a2-7362-42ba-b144-29938f44cef0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/multinode-232000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000"}
	I0917 02:25:11.554284    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8074f2a2-7362-42ba-b144-29938f44cef0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/multinode-232000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000"
	I0917 02:25:11.554293    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:25:11.555778    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 DEBUG: hyperkit: Pid is 5233
	I0917 02:25:11.556244    5221 main.go:141] libmachine: (multinode-232000) DBG | Attempt 0
	I0917 02:25:11.556266    5221 main.go:141] libmachine: (multinode-232000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:25:11.556339    5221 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid from json: 5233
	I0917 02:25:11.558031    5221 main.go:141] libmachine: (multinode-232000) DBG | Searching for 5a:1f:11:e5:b7:54 in /var/db/dhcpd_leases ...
	I0917 02:25:11.558105    5221 main.go:141] libmachine: (multinode-232000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0917 02:25:11.558119    5221 main.go:141] libmachine: (multinode-232000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66e94ae5}
	I0917 02:25:11.558131    5221 main.go:141] libmachine: (multinode-232000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9bd8}
	I0917 02:25:11.558140    5221 main.go:141] libmachine: (multinode-232000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9b9c}
	I0917 02:25:11.558147    5221 main.go:141] libmachine: (multinode-232000) DBG | Found match: 5a:1f:11:e5:b7:54
	I0917 02:25:11.558151    5221 main.go:141] libmachine: (multinode-232000) DBG | IP: 192.169.0.14
	I0917 02:25:11.558204    5221 main.go:141] libmachine: (multinode-232000) Calling .GetConfigRaw
	I0917 02:25:11.558782    5221 main.go:141] libmachine: (multinode-232000) Calling .GetIP
	I0917 02:25:11.558959    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:25:11.559327    5221 machine.go:93] provisionDockerMachine start ...
	I0917 02:25:11.559338    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:11.559483    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:11.559597    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:11.559691    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:11.559795    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:11.559920    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:11.560074    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:11.560354    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:11.560367    5221 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:25:11.563754    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:25:11.616965    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:25:11.617672    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:25:11.617692    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:25:11.617710    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:25:11.617722    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:25:11.999536    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:25:11.999551    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:25:12.114206    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:25:12.114221    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:25:12.114232    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:25:12.114241    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:25:12.115134    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:25:12.115147    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:25:17.701802    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:17 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:25:17.701861    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:17 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:25:17.701870    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:17 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:25:17.725838    5221 main.go:141] libmachine: (multinode-232000) DBG | 2024/09/17 02:25:17 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:25:46.633605    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:25:46.633620    5221 main.go:141] libmachine: (multinode-232000) Calling .GetMachineName
	I0917 02:25:46.633767    5221 buildroot.go:166] provisioning hostname "multinode-232000"
	I0917 02:25:46.633779    5221 main.go:141] libmachine: (multinode-232000) Calling .GetMachineName
	I0917 02:25:46.633891    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:46.633980    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:46.634090    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:46.634178    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:46.634291    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:46.634444    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:46.634591    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:46.634599    5221 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-232000 && echo "multinode-232000" | sudo tee /etc/hostname
	I0917 02:25:46.702276    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-232000
	
	I0917 02:25:46.702294    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:46.702424    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:46.702528    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:46.702615    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:46.702704    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:46.702841    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:46.702983    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:46.702994    5221 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-232000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-232000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-232000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:25:46.767411    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: 
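Editor's note: the /etc/hosts command above is idempotent — it greps for a line already ending in the hostname, rewrites an existing 127.0.1.1 entry if one is present, and only otherwise appends one. A minimal Go sketch of the same logic applied to file contents (an illustration, not minikube's provisioning code; ensureHostname is an invented helper):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    func ensureHostname(hosts, name string) string {
    	// Already present? (mirrors: grep -xq '.*\smultinode-232000' /etc/hosts)
    	for _, line := range strings.Split(hosts, "\n") {
    		if strings.HasSuffix(strings.TrimSpace(line), " "+name) {
    			return hosts
    		}
    	}
    	// Rewrite an existing 127.0.1.1 entry, else append one.
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if re.MatchString(hosts) {
    		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	return hosts + "\n127.0.1.1 " + name
    }

    func main() {
    	fmt.Println(ensureHostname("127.0.0.1 localhost", "multinode-232000"))
    }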
	I0917 02:25:46.767432    5221 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:25:46.767453    5221 buildroot.go:174] setting up certificates
	I0917 02:25:46.767460    5221 provision.go:84] configureAuth start
	I0917 02:25:46.767485    5221 main.go:141] libmachine: (multinode-232000) Calling .GetMachineName
	I0917 02:25:46.767628    5221 main.go:141] libmachine: (multinode-232000) Calling .GetIP
	I0917 02:25:46.767755    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:46.767837    5221 provision.go:143] copyHostCerts
	I0917 02:25:46.767869    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:25:46.767938    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:25:46.767946    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:25:46.768090    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:25:46.768309    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:25:46.768354    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:25:46.768359    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:25:46.768436    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:25:46.768577    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:25:46.768614    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:25:46.768619    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:25:46.768694    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:25:46.768828    5221 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.multinode-232000 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-232000]
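Editor's note: the generated server cert carries the SAN list shown above (loopback, the VM IP, and the host/cluster names). A self-contained crypto/x509 sketch that issues a certificate with the same SANs from a throwaway in-memory CA (illustrative only; the real provisioner signs with the CA key it reads from the certs directory):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srv := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-232000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		// Same SAN set as the log line above.
    		DNSNames:    []string{"localhost", "minikube", "multinode-232000"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.14")},
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
    	fmt.Println("DER bytes:", len(der), "err:", err)
    }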
	I0917 02:25:46.944935    5221 provision.go:177] copyRemoteCerts
	I0917 02:25:46.944993    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:25:46.945011    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:46.945139    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:46.945235    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:46.945321    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:46.945415    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:25:46.983014    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:25:46.983083    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:25:47.002033    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:25:47.002091    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0917 02:25:47.020618    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:25:47.020696    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:25:47.039960    5221 provision.go:87] duration metric: took 272.472463ms to configureAuth
	I0917 02:25:47.039975    5221 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:25:47.040145    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:25:47.040159    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:47.040295    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:47.040391    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:47.040474    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:47.040549    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:47.040628    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:47.040745    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:47.040871    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:47.040878    5221 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:25:47.099959    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:25:47.099972    5221 buildroot.go:70] root file system type: tmpfs
	I0917 02:25:47.100043    5221 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:25:47.100059    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:47.100186    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:47.100273    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:47.100358    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:47.100447    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:47.100591    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:47.100732    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:47.100773    5221 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:25:47.168880    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:25:47.168903    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:47.169036    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:47.169129    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:47.169224    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:47.169315    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:47.169456    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:47.169611    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:47.169623    5221 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:25:48.818488    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
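Editor's note: the diff || { mv; daemon-reload; enable; restart; } command above only swaps the unit file and bounces Docker when the rendered unit actually differs; here diff fails because docker.service does not exist yet, so the new file is installed and the service enabled (hence the "Created symlink" output). A rough local-filesystem analogue in Go (assumed helper; the real step runs over SSH and needs root):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func updateUnit(path string, desired []byte) error {
    	current, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(current, desired) {
    		return nil // unchanged: skip daemon-reload/enable/restart entirely
    	}
    	if err := os.WriteFile(path+".new", desired, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(path+".new", path); err != nil {
    		return err
    	}
    	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
    	fmt.Println(updateUnit("/lib/systemd/system/docker.service", unit))
    }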
	I0917 02:25:48.818504    5221 machine.go:96] duration metric: took 37.258998272s to provisionDockerMachine
	I0917 02:25:48.818516    5221 start.go:293] postStartSetup for "multinode-232000" (driver="hyperkit")
	I0917 02:25:48.818523    5221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:25:48.818536    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:48.818724    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:25:48.818738    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:48.818837    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:48.818933    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:48.819013    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:48.819106    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:25:48.856274    5221 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:25:48.859324    5221 command_runner.go:130] > NAME=Buildroot
	I0917 02:25:48.859336    5221 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0917 02:25:48.859343    5221 command_runner.go:130] > ID=buildroot
	I0917 02:25:48.859349    5221 command_runner.go:130] > VERSION_ID=2023.02.9
	I0917 02:25:48.859355    5221 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0917 02:25:48.859449    5221 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:25:48.859461    5221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:25:48.859554    5221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:25:48.859741    5221 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:25:48.859747    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:25:48.859958    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:25:48.867363    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:25:48.886814    5221 start.go:296] duration metric: took 68.289508ms for postStartSetup
	I0917 02:25:48.886835    5221 fix.go:56] duration metric: took 37.515109536s for fixHost
	I0917 02:25:48.886846    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:48.886983    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:48.887084    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:48.887176    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:48.887267    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:48.887393    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:25:48.887527    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0917 02:25:48.887534    5221 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:25:48.946757    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726565149.085879655
	
	I0917 02:25:48.946768    5221 fix.go:216] guest clock: 1726565149.085879655
	I0917 02:25:48.946773    5221 fix.go:229] Guest: 2024-09-17 02:25:49.085879655 -0700 PDT Remote: 2024-09-17 02:25:48.886837 -0700 PDT m=+37.955385830 (delta=199.042655ms)
	I0917 02:25:48.946795    5221 fix.go:200] guest clock delta is within tolerance: 199.042655ms
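Editor's note: fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the drift when it is small — here 1726565149.085879655 minus 02:25:48.886837 gives exactly the 199.042655ms logged. A Go sketch of that parse-and-compare step (guestClockDelta is an invented name; %N is assumed to print nine digits as GNU date does, and the 2s tolerance is assumed for illustration):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
    	host := time.Unix(1726565148, 886837000) // host clock from the log line above
    	d, _ := guestClockDelta("1726565149.085879655", host)
    	fmt.Println(d, "within 2s tolerance:", d > -2*time.Second && d < 2*time.Second)
    }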
	I0917 02:25:48.946799    5221 start.go:83] releasing machines lock for "multinode-232000", held for 37.575103856s
	I0917 02:25:48.946821    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:48.946963    5221 main.go:141] libmachine: (multinode-232000) Calling .GetIP
	I0917 02:25:48.947075    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:48.947409    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:48.947517    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:25:48.947603    5221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:25:48.947632    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:48.947664    5221 ssh_runner.go:195] Run: cat /version.json
	I0917 02:25:48.947677    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:25:48.947708    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:48.947770    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:25:48.947790    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:48.947870    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:48.947884    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:25:48.947990    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:25:48.948007    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:25:48.948092    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:25:48.978305    5221 command_runner.go:130] > {"iso_version": "v1.34.0-1726415472-19646", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "7dc55c0008a982396eb57879cd4eab23ab96531e"}
	I0917 02:25:48.978483    5221 ssh_runner.go:195] Run: systemctl --version
	I0917 02:25:48.982966    5221 command_runner.go:130] > systemd 252 (252)
	I0917 02:25:48.982985    5221 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0917 02:25:48.983166    5221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:25:49.036752    5221 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0917 02:25:49.036937    5221 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0917 02:25:49.036978    5221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:25:49.037083    5221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:25:49.051276    5221 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0917 02:25:49.051288    5221 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:25:49.051294    5221 start.go:495] detecting cgroup driver to use...
	I0917 02:25:49.051395    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:25:49.066177    5221 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0917 02:25:49.066484    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:25:49.075357    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:25:49.084087    5221 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:25:49.084134    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:25:49.092689    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:25:49.101467    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:25:49.110202    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:25:49.118748    5221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:25:49.127704    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:25:49.136332    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:25:49.145077    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:25:49.153808    5221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:25:49.161540    5221 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0917 02:25:49.161787    5221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:25:49.169757    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:25:49.271229    5221 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:25:49.289689    5221 start.go:495] detecting cgroup driver to use...
	I0917 02:25:49.289782    5221 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:25:49.304789    5221 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0917 02:25:49.304802    5221 command_runner.go:130] > [Unit]
	I0917 02:25:49.304807    5221 command_runner.go:130] > Description=Docker Application Container Engine
	I0917 02:25:49.304812    5221 command_runner.go:130] > Documentation=https://docs.docker.com
	I0917 02:25:49.304816    5221 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0917 02:25:49.304820    5221 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0917 02:25:49.304824    5221 command_runner.go:130] > StartLimitBurst=3
	I0917 02:25:49.304828    5221 command_runner.go:130] > StartLimitIntervalSec=60
	I0917 02:25:49.304831    5221 command_runner.go:130] > [Service]
	I0917 02:25:49.304834    5221 command_runner.go:130] > Type=notify
	I0917 02:25:49.304838    5221 command_runner.go:130] > Restart=on-failure
	I0917 02:25:49.304844    5221 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0917 02:25:49.304856    5221 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0917 02:25:49.304862    5221 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0917 02:25:49.304867    5221 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0917 02:25:49.304873    5221 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0917 02:25:49.304879    5221 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0917 02:25:49.304885    5221 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0917 02:25:49.304893    5221 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0917 02:25:49.304899    5221 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0917 02:25:49.304905    5221 command_runner.go:130] > ExecStart=
	I0917 02:25:49.304917    5221 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0917 02:25:49.304921    5221 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0917 02:25:49.304935    5221 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0917 02:25:49.304941    5221 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0917 02:25:49.304949    5221 command_runner.go:130] > LimitNOFILE=infinity
	I0917 02:25:49.304953    5221 command_runner.go:130] > LimitNPROC=infinity
	I0917 02:25:49.304962    5221 command_runner.go:130] > LimitCORE=infinity
	I0917 02:25:49.304967    5221 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0917 02:25:49.304971    5221 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0917 02:25:49.304974    5221 command_runner.go:130] > TasksMax=infinity
	I0917 02:25:49.304978    5221 command_runner.go:130] > TimeoutStartSec=0
	I0917 02:25:49.304983    5221 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0917 02:25:49.304987    5221 command_runner.go:130] > Delegate=yes
	I0917 02:25:49.304991    5221 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0917 02:25:49.304995    5221 command_runner.go:130] > KillMode=process
	I0917 02:25:49.304998    5221 command_runner.go:130] > [Install]
	I0917 02:25:49.305007    5221 command_runner.go:130] > WantedBy=multi-user.target
	I0917 02:25:49.305089    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:25:49.316713    5221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:25:49.333410    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:25:49.344788    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:25:49.355660    5221 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:25:49.376347    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:25:49.387282    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:25:49.402080    5221 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
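Editor's note: both crictl.yaml writes above (first pointing at containerd, then at cri-dockerd once Docker is chosen as the runtime) just set crictl's runtime-endpoint to the active CRI socket. A local Go analogue of the printf | sudo tee pipeline (illustrative; writing /etc/crictl.yaml requires root):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Same payload as the second write above, for the cri-dockerd socket.
    	cfg := []byte("runtime-endpoint: unix:///var/run/cri-dockerd.sock\n")
    	if err := os.WriteFile("/etc/crictl.yaml", cfg, 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }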
	I0917 02:25:49.402454    5221 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:25:49.405272    5221 command_runner.go:130] > /usr/bin/cri-dockerd
	I0917 02:25:49.405493    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:25:49.412708    5221 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:25:49.426157    5221 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:25:49.525562    5221 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:25:49.626996    5221 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:25:49.627079    5221 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:25:49.641067    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:25:49.732233    5221 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:25:52.043365    5221 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.311103221s)
	I0917 02:25:52.043436    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:25:52.054133    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:25:52.065481    5221 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:25:52.170411    5221 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:25:52.267424    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:25:52.371852    5221 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:25:52.385452    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:25:52.396451    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:25:52.500601    5221 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:25:52.555221    5221 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:25:52.555327    5221 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:25:52.559285    5221 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0917 02:25:52.559296    5221 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0917 02:25:52.559301    5221 command_runner.go:130] > Device: 0,22	Inode: 769         Links: 1
	I0917 02:25:52.559306    5221 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0917 02:25:52.559310    5221 command_runner.go:130] > Access: 2024-09-17 09:25:52.651459721 +0000
	I0917 02:25:52.559324    5221 command_runner.go:130] > Modify: 2024-09-17 09:25:52.651459721 +0000
	I0917 02:25:52.559330    5221 command_runner.go:130] > Change: 2024-09-17 09:25:52.653459677 +0000
	I0917 02:25:52.559333    5221 command_runner.go:130] >  Birth: -
	I0917 02:25:52.559359    5221 start.go:563] Will wait 60s for crictl version
	I0917 02:25:52.559412    5221 ssh_runner.go:195] Run: which crictl
	I0917 02:25:52.562238    5221 command_runner.go:130] > /usr/bin/crictl
	I0917 02:25:52.562381    5221 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:25:52.586241    5221 command_runner.go:130] > Version:  0.1.0
	I0917 02:25:52.586254    5221 command_runner.go:130] > RuntimeName:  docker
	I0917 02:25:52.586274    5221 command_runner.go:130] > RuntimeVersion:  27.2.1
	I0917 02:25:52.586362    5221 command_runner.go:130] > RuntimeApiVersion:  v1
	I0917 02:25:52.587478    5221 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:25:52.587565    5221 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:25:52.602824    5221 command_runner.go:130] > 27.2.1
	I0917 02:25:52.603853    5221 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:25:52.623426    5221 command_runner.go:130] > 27.2.1
	I0917 02:25:52.667611    5221 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:25:52.667657    5221 main.go:141] libmachine: (multinode-232000) Calling .GetIP
	I0917 02:25:52.668059    5221 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:25:52.672597    5221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:25:52.682201    5221 kubeadm.go:883] updating cluster {Name:multinode-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.16 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 02:25:52.682298    5221 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:25:52.682365    5221 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:25:52.694663    5221 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0917 02:25:52.694695    5221 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0917 02:25:52.694700    5221 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 02:25:52.694708    5221 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0917 02:25:52.694713    5221 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0917 02:25:52.694717    5221 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0917 02:25:52.694723    5221 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0917 02:25:52.694726    5221 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0917 02:25:52.694730    5221 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:25:52.694734    5221 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0917 02:25:52.695389    5221 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:25:52.695402    5221 docker.go:615] Images already preloaded, skipping extraction
	I0917 02:25:52.695485    5221 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:25:52.708148    5221 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0917 02:25:52.708161    5221 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 02:25:52.708166    5221 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0917 02:25:52.708170    5221 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0917 02:25:52.708173    5221 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0917 02:25:52.708177    5221 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0917 02:25:52.708193    5221 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0917 02:25:52.708199    5221 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0917 02:25:52.708202    5221 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:25:52.708206    5221 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0917 02:25:52.708835    5221 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 02:25:52.708852    5221 cache_images.go:84] Images are preloaded, skipping loading
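Editor's note: the preload verification above is two `docker images --format {{.Repository}}:{{.Tag}}` listings checked against the expected image set; since everything is present, extraction and loading are skipped. A Go sketch of that check (illustrative, with the expected list shortened):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		fmt.Println("docker images failed:", err)
    		return
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	for _, want := range []string{
    		"registry.k8s.io/kube-apiserver:v1.31.1",
    		"registry.k8s.io/etcd:3.5.15-0",
    		"registry.k8s.io/pause:3.10",
    	} {
    		if !have[want] {
    			fmt.Println("missing preloaded image:", want)
    		}
    	}
    }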
	I0917 02:25:52.708861    5221 kubeadm.go:934] updating node { 192.169.0.14 8443 v1.31.1 docker true true} ...
	I0917 02:25:52.708941    5221 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-232000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:25:52.709025    5221 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:25:52.741672    5221 command_runner.go:130] > cgroupfs
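Editor's note: the cgroup-driver probe above is a single `docker info` call, and its answer ("cgroupfs" here) must match the cgroupDriver that ends up in the kubelet and kubeadm configs below, or pods fail to start. Sketch of the probe:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same probe as the log line above.
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		fmt.Println("docker info failed:", err)
    		return
    	}
    	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // e.g. "cgroupfs"
    }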
	I0917 02:25:52.742672    5221 cni.go:84] Creating CNI manager for ""
	I0917 02:25:52.742682    5221 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 02:25:52.742698    5221 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:25:52.742716    5221 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.14 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-232000 NodeName:multinode-232000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:25:52.742802    5221 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-232000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
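Editor's note: the kubeadm config above is rendered from the option struct logged at kubeadm.go:181. A stdlib text/template sketch of how such a document can be generated (field and template names invented; minikube's real template covers far more fields):

    package main

    import (
    	"os"
    	"text/template"
    )

    type initCfg struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	CRISocket        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
    	c := initCfg{"192.169.0.14", 8443, "multinode-232000", "/var/run/cri-dockerd.sock"}
    	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, c)
    }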
	I0917 02:25:52.742876    5221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:25:52.750643    5221 command_runner.go:130] > kubeadm
	I0917 02:25:52.750652    5221 command_runner.go:130] > kubectl
	I0917 02:25:52.750656    5221 command_runner.go:130] > kubelet
	I0917 02:25:52.750671    5221 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:25:52.750724    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 02:25:52.758162    5221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0917 02:25:52.771785    5221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:25:52.784981    5221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0917 02:25:52.798896    5221 ssh_runner.go:195] Run: grep 192.169.0.14	control-plane.minikube.internal$ /etc/hosts
	I0917 02:25:52.801715    5221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:25:52.810997    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:25:52.907165    5221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:25:52.922386    5221 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000 for IP: 192.169.0.14
	I0917 02:25:52.922399    5221 certs.go:194] generating shared ca certs ...
	I0917 02:25:52.922409    5221 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:25:52.922601    5221 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:25:52.922675    5221 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:25:52.922690    5221 certs.go:256] generating profile certs ...
	I0917 02:25:52.922796    5221 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.key
	I0917 02:25:52.922874    5221 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/apiserver.key.4fa80143
	I0917 02:25:52.922951    5221 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/proxy-client.key
	I0917 02:25:52.922959    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:25:52.922979    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:25:52.922997    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:25:52.923014    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:25:52.923031    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 02:25:52.923065    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 02:25:52.923099    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 02:25:52.923118    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 02:25:52.923222    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:25:52.923266    5221 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:25:52.923275    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:25:52.923306    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:25:52.923335    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:25:52.923361    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:25:52.923424    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:25:52.923461    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:25:52.923481    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:25:52.923497    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:25:52.923958    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:25:52.949310    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:25:52.973114    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:25:52.998495    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:25:53.022314    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 02:25:53.041667    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 02:25:53.060613    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:25:53.079637    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 02:25:53.099094    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:25:53.117840    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:25:53.137106    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:25:53.156395    5221 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 02:25:53.170149    5221 ssh_runner.go:195] Run: openssl version
	I0917 02:25:53.174089    5221 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0917 02:25:53.174290    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:25:53.183220    5221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:25:53.186389    5221 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:25:53.186491    5221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:25:53.186529    5221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:25:53.190436    5221 command_runner.go:130] > 3ec20f2e
	I0917 02:25:53.190652    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:25:53.199463    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:25:53.208313    5221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:25:53.211525    5221 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:25:53.211736    5221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:25:53.211781    5221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:25:53.215664    5221 command_runner.go:130] > b5213941
	I0917 02:25:53.215865    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:25:53.224761    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:25:53.233608    5221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:25:53.236787    5221 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:25:53.236965    5221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:25:53.237013    5221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:25:53.240934    5221 command_runner.go:130] > 51391683
	I0917 02:25:53.241093    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
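
The test/ln pairs above implement OpenSSL's subject-hash trust convention: openssl x509 -hash -noout prints the subject hash (3ec20f2e, b5213941 and 51391683 in this run), and a symlink named <hash>.0 under /etc/ssl/certs is what OpenSSL-linked clients actually look up. A short sketch of the same step in Go, assuming an openssl binary on PATH and write access to /etc/ssl/certs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA reproduces the hash-and-symlink step from the log: ask
    // openssl for the certificate's subject hash, then point
    // /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL-based programs trust it.
    func installCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
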
	I0917 02:25:53.250009    5221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:25:53.253211    5221 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:25:53.253220    5221 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0917 02:25:53.253225    5221 command_runner.go:130] > Device: 253,1	Inode: 1052957     Links: 1
	I0917 02:25:53.253230    5221 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0917 02:25:53.253236    5221 command_runner.go:130] > Access: 2024-09-17 09:21:47.498539680 +0000
	I0917 02:25:53.253240    5221 command_runner.go:130] > Modify: 2024-09-17 09:21:47.498539680 +0000
	I0917 02:25:53.253243    5221 command_runner.go:130] > Change: 2024-09-17 09:21:47.498539680 +0000
	I0917 02:25:53.253248    5221 command_runner.go:130] >  Birth: 2024-09-17 09:21:47.498539680 +0000
	I0917 02:25:53.253359    5221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:25:53.257383    5221 command_runner.go:130] > Certificate will not expire
	I0917 02:25:53.257582    5221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:25:53.261570    5221 command_runner.go:130] > Certificate will not expire
	I0917 02:25:53.261661    5221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:25:53.266080    5221 command_runner.go:130] > Certificate will not expire
	I0917 02:25:53.266153    5221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:25:53.270348    5221 command_runner.go:130] > Certificate will not expire
	I0917 02:25:53.270434    5221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:25:53.274559    5221 command_runner.go:130] > Certificate will not expire
	I0917 02:25:53.274656    5221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 02:25:53.278684    5221 command_runner.go:130] > Certificate will not expire
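
Each openssl x509 -checkend 86400 call above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" is openssl's literal success message. The same check can be done without shelling out, via Go's crypto/x509; a sketch, using one of the cert paths checked above:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin mirrors `openssl x509 -checkend`: does the cert's
    // NotAfter fall inside the next `window`?
    func expiresWithin(path string, window time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
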
	I0917 02:25:53.278858    5221 kubeadm.go:392] StartCluster: {Name:multinode-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.16 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:25:53.278995    5221 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:25:53.291333    5221 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:25:53.299530    5221 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0917 02:25:53.299542    5221 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0917 02:25:53.299547    5221 command_runner.go:130] > /var/lib/minikube/etcd:
	I0917 02:25:53.299550    5221 command_runner.go:130] > member
	I0917 02:25:53.299588    5221 kubeadm.go:408] found existing configuration files, will attempt cluster restart
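
The restart-versus-reinit decision above hinges on a single ls probe: if the kubelet config, the kubeadm flags file and the etcd data directory are all present, the cluster is restarted rather than re-initialised. A sketch of that probe, with the same three paths:

    package main

    import (
        "fmt"
        "os"
    )

    // haveExistingCluster mirrors the "found existing configuration files,
    // will attempt cluster restart" check: all prior-state paths must exist.
    func haveExistingCluster() bool {
        for _, p := range []string{
            "/var/lib/kubelet/config.yaml",
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/minikube/etcd",
        } {
            if _, err := os.Stat(p); err != nil {
                return false
            }
        }
        return true
    }

    func main() {
        if haveExistingCluster() {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        } else {
            fmt.Println("no prior state, full init required")
        }
    }
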
	I0917 02:25:53.299601    5221 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:25:53.299653    5221 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:25:53.307627    5221 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:25:53.307940    5221 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-232000" does not appear in /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:25:53.308034    5221 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1025/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-232000" cluster setting kubeconfig missing "multinode-232000" context setting]
	I0917 02:25:53.308220    5221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:25:53.309013    5221 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:25:53.309237    5221 kapi.go:59] client config for multinode-232000: &rest.Config{Host:"https://192.169.0.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x410b720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:25:53.309564    5221 cert_rotation.go:140] Starting client certificate rotation controller
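
The kubeconfig repair above ("multinode-232000" does not appear in the file, so cluster and context entries are written back) is a standard client-go operation. A sketch using clientcmd's load/write helpers; the file path is a placeholder, and the matching AuthInfo entry is omitted for brevity:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig adds the missing cluster and context entries for
    // `name`, pointing at the control-plane endpoint, then writes the file.
    func repairKubeconfig(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; ok {
            return nil // endpoint already recorded, nothing to repair
        }
        cluster := clientcmdapi.NewCluster()
        cluster.Server = server
        cfg.Clusters[name] = cluster
        ctx := clientcmdapi.NewContext()
        ctx.Cluster = name
        ctx.AuthInfo = name
        cfg.Contexts[name] = ctx
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        if err := repairKubeconfig("kubeconfig", "multinode-232000", "https://192.169.0.14:8443"); err != nil {
            fmt.Println(err)
        }
    }
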
	I0917 02:25:53.309777    5221 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:25:53.317711    5221 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.14
	I0917 02:25:53.317731    5221 kubeadm.go:1160] stopping kube-system containers ...
	I0917 02:25:53.317799    5221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:25:53.331841    5221 command_runner.go:130] > 8b2f4ea197c5
	I0917 02:25:53.331853    5221 command_runner.go:130] > f7ccad53a257
	I0917 02:25:53.331857    5221 command_runner.go:130] > 64f91acf5d83
	I0917 02:25:53.331860    5221 command_runner.go:130] > 84e22c05755c
	I0917 02:25:53.331863    5221 command_runner.go:130] > 3dc3bd4da839
	I0917 02:25:53.331868    5221 command_runner.go:130] > 96e8ac7b181c
	I0917 02:25:53.331871    5221 command_runner.go:130] > b6a933d5abb7
	I0917 02:25:53.331875    5221 command_runner.go:130] > 90f44d581694
	I0917 02:25:53.331893    5221 command_runner.go:130] > ab8e6362f133
	I0917 02:25:53.331899    5221 command_runner.go:130] > 5db9fa24f683
	I0917 02:25:53.331908    5221 command_runner.go:130] > 8e788bff41ec
	I0917 02:25:53.331911    5221 command_runner.go:130] > ff3a45c5df2e
	I0917 02:25:53.331924    5221 command_runner.go:130] > f9ddf66585b5
	I0917 02:25:53.331929    5221 command_runner.go:130] > 8e04470f77bc
	I0917 02:25:53.331933    5221 command_runner.go:130] > 77ac0fcdf71b
	I0917 02:25:53.331936    5221 command_runner.go:130] > 8998ef0cd2fb
	I0917 02:25:53.331952    5221 docker.go:483] Stopping containers: [8b2f4ea197c5 f7ccad53a257 64f91acf5d83 84e22c05755c 3dc3bd4da839 96e8ac7b181c b6a933d5abb7 90f44d581694 ab8e6362f133 5db9fa24f683 8e788bff41ec ff3a45c5df2e f9ddf66585b5 8e04470f77bc 77ac0fcdf71b 8998ef0cd2fb]
	I0917 02:25:53.332033    5221 ssh_runner.go:195] Run: docker stop 8b2f4ea197c5 f7ccad53a257 64f91acf5d83 84e22c05755c 3dc3bd4da839 96e8ac7b181c b6a933d5abb7 90f44d581694 ab8e6362f133 5db9fa24f683 8e788bff41ec ff3a45c5df2e f9ddf66585b5 8e04470f77bc 77ac0fcdf71b 8998ef0cd2fb
	I0917 02:25:53.346953    5221 command_runner.go:130] > 8b2f4ea197c5
	I0917 02:25:53.346973    5221 command_runner.go:130] > f7ccad53a257
	I0917 02:25:53.346977    5221 command_runner.go:130] > 64f91acf5d83
	I0917 02:25:53.346980    5221 command_runner.go:130] > 84e22c05755c
	I0917 02:25:53.346983    5221 command_runner.go:130] > 3dc3bd4da839
	I0917 02:25:53.346986    5221 command_runner.go:130] > 96e8ac7b181c
	I0917 02:25:53.346989    5221 command_runner.go:130] > b6a933d5abb7
	I0917 02:25:53.346992    5221 command_runner.go:130] > 90f44d581694
	I0917 02:25:53.346995    5221 command_runner.go:130] > ab8e6362f133
	I0917 02:25:53.346999    5221 command_runner.go:130] > 5db9fa24f683
	I0917 02:25:53.347003    5221 command_runner.go:130] > 8e788bff41ec
	I0917 02:25:53.347385    5221 command_runner.go:130] > ff3a45c5df2e
	I0917 02:25:53.347392    5221 command_runner.go:130] > f9ddf66585b5
	I0917 02:25:53.347396    5221 command_runner.go:130] > 8e04470f77bc
	I0917 02:25:53.347572    5221 command_runner.go:130] > 77ac0fcdf71b
	I0917 02:25:53.347579    5221 command_runner.go:130] > 8998ef0cd2fb
	I0917 02:25:53.348813    5221 ssh_runner.go:195] Run: sudo systemctl stop kubelet
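
Stopping the control plane above is a two-step teardown: collect every container whose kubelet-assigned name matches k8s_<container>_<pod>_kube-system_..., stop them in a single docker stop, then stop the kubelet so nothing restarts them. A sketch of the container half:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // stopKubeSystem lists container IDs whose names match the kube-system
    // naming pattern, then stops them all in one `docker stop` call,
    // mirroring the docker.go:483 lines above.
    func stopKubeSystem() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_",
            "--format={{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil
        }
        fmt.Printf("Stopping containers: %v\n", ids)
        return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }

    func main() {
        if err := stopKubeSystem(); err != nil {
            fmt.Println(err)
        }
    }
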
	I0917 02:25:53.362209    5221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 02:25:53.370338    5221 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0917 02:25:53.370349    5221 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0917 02:25:53.370355    5221 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0917 02:25:53.370361    5221 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 02:25:53.370416    5221 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 02:25:53.370424    5221 kubeadm.go:157] found existing configuration files:
	
	I0917 02:25:53.370486    5221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 02:25:53.378115    5221 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 02:25:53.378130    5221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 02:25:53.378173    5221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 02:25:53.385977    5221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 02:25:53.393671    5221 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 02:25:53.393699    5221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 02:25:53.393755    5221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 02:25:53.401670    5221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 02:25:53.409227    5221 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 02:25:53.409251    5221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 02:25:53.409297    5221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 02:25:53.417259    5221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 02:25:53.424729    5221 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 02:25:53.424749    5221 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 02:25:53.424798    5221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
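
The four grep/rm pairs above are the stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so the kubeadm kubeconfig phase below can regenerate it. The same loop as a sketch:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleConfigs drops every kubeconfig that is missing the expected
    // control-plane endpoint, reproducing the grep-then-rm pattern above.
    func cleanStaleConfigs(endpoint string) {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(f) // missing file or stale endpoint: remove it
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:8443")
    }
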
	I0917 02:25:53.432828    5221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 02:25:53.440635    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:25:53.510000    5221 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 02:25:53.510169    5221 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0917 02:25:53.510375    5221 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0917 02:25:53.510537    5221 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 02:25:53.510763    5221 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0917 02:25:53.510941    5221 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0917 02:25:53.511194    5221 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0917 02:25:53.511398    5221 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0917 02:25:53.511569    5221 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0917 02:25:53.511729    5221 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 02:25:53.511890    5221 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 02:25:53.512068    5221 command_runner.go:130] > [certs] Using the existing "sa" key
	I0917 02:25:53.513110    5221 command_runner.go:130] ! W0917 09:25:53.649117    1323 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:53.513128    5221 command_runner.go:130] ! W0917 09:25:53.650342    1323 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:53.513142    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:25:53.549689    5221 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 02:25:53.933539    5221 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 02:25:54.068325    5221 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 02:25:54.205343    5221 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 02:25:54.330285    5221 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 02:25:54.568018    5221 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 02:25:54.570199    5221 command_runner.go:130] ! W0917 09:25:53.690144    1327 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.570217    5221 command_runner.go:130] ! W0917 09:25:53.690801    1327 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.570234    5221 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.057078246s)
	I0917 02:25:54.570253    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:25:54.620172    5221 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 02:25:54.624895    5221 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 02:25:54.624904    5221 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0917 02:25:54.727485    5221 command_runner.go:130] ! W0917 09:25:54.748587    1332 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.727507    5221 command_runner.go:130] ! W0917 09:25:54.749120    1332 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.727531    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:25:54.769003    5221 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 02:25:54.769017    5221 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 02:25:54.771732    5221 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 02:25:54.771750    5221 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 02:25:54.779487    5221 command_runner.go:130] ! W0917 09:25:54.910552    1360 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.779510    5221 command_runner.go:130] ! W0917 09:25:54.911046    1360 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.779524    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:25:54.846913    5221 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 02:25:54.849869    5221 command_runner.go:130] ! W0917 09:25:54.986774    1368 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:25:54.849893    5221 command_runner.go:130] ! W0917 09:25:54.987728    1368 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
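
Rather than one monolithic kubeadm init, the restart path replays individual init phases against the generated /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane and etcd, in that order (the v1beta3 deprecation warnings recur because each invocation re-reads the same config). A sketch of the sequence, with sudo and the pinned-binaries PATH elided:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases replays the phase order visible in the log, each phase
    // driven by the same generated kubeadm.yaml.
    func runInitPhases(config string) error {
        for _, phase := range [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        } {
            args := append(phase, "--config", config)
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("kubeadm %v: %v\n%s", phase, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
            fmt.Println(err)
        }
    }
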
	I0917 02:25:54.849929    5221 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:25:54.850003    5221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:25:55.350188    5221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:25:55.851457    5221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:25:55.865476    5221 command_runner.go:130] > 1651
	I0917 02:25:55.865505    5221 api_server.go:72] duration metric: took 1.015586828s to wait for apiserver process to appear ...
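
The repeated pgrep lines above are a simple poll: roughly every 500ms, look for a kube-apiserver process whose command line mentions minikube, and record the elapsed time once a PID appears (1651 here, after about a second). A sketch of that wait:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until a matching kube-apiserver
    // process shows up, then reports how long the wait took. pgrep exits
    // non-zero when nothing matches, which drives the retry.
    func waitForAPIServerProcess(timeout time.Duration) (time.Duration, error) {
        start := time.Now()
        for time.Since(start) < timeout {
            out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("pid %s", out)
                return time.Since(start), nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return time.Since(start), fmt.Errorf("kube-apiserver process never appeared")
    }

    func main() {
        d, err := waitForAPIServerProcess(time.Minute)
        fmt.Printf("took %s, err=%v\n", d, err)
    }
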
	I0917 02:25:55.865512    5221 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:25:55.865528    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:25:58.289702    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 02:25:58.289718    5221 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 02:25:58.289726    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:25:58.322928    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 02:25:58.322948    5221 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 02:25:58.366073    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:25:58.372850    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 02:25:58.372865    5221 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 02:25:58.865828    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:25:58.870757    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 02:25:58.870780    5221 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 02:25:59.366312    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:25:59.370712    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 02:25:59.370724    5221 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 02:25:59.865828    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:25:59.869097    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
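
The healthz exchange above shows the apiserver's usual startup arc: first 403 (the anonymous probe is rejected until the RBAC bootstrap roles exist), then 500 with per-hook [+]/[-] lines while poststarthooks drain, and finally a bare 200 "ok". A sketch of such a poll loop; TLS verification is skipped here purely to keep the example short, where the real client presents the cluster CA and client certificates:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls /healthz until the first plain 200, printing the
    // per-hook diagnostics that 403/500 responses carry in the meantime.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
        }}
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy")
    }

    func main() {
        if err := waitHealthz("https://192.169.0.14:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
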
	I0917 02:25:59.869160    5221 round_trippers.go:463] GET https://192.169.0.14:8443/version
	I0917 02:25:59.869165    5221 round_trippers.go:469] Request Headers:
	I0917 02:25:59.869172    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:25:59.869177    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:25:59.874624    5221 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 02:25:59.874634    5221 round_trippers.go:577] Response Headers:
	I0917 02:25:59.874639    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:25:59.874643    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:25:59.874645    5221 round_trippers.go:580]     Content-Length: 263
	I0917 02:25:59.874648    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:25:59.874651    5221 round_trippers.go:580]     Audit-Id: 05c5c2ea-2c5b-4bba-a27b-2ae34fbcbd06
	I0917 02:25:59.874654    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:25:59.874656    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:25:59.874673    5221 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0917 02:25:59.874718    5221 api_server.go:141] control plane version: v1.31.1
	I0917 02:25:59.874727    5221 api_server.go:131] duration metric: took 4.009192512s to wait for apiserver health ...
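
Once healthz returns 200, a GET on /version confirms the control-plane build (v1.31.1 above). A sketch that decodes the same response body shown in the log; again, InsecureSkipVerify stands in for proper CA configuration:

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // versionInfo matches the fields of the /version body above; only the
    // ones a wait loop typically cares about are decoded.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
        }}
        resp, err := client.Get("https://192.169.0.14:8443/version")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer resp.Body.Close()
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
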
	I0917 02:25:59.874742    5221 cni.go:84] Creating CNI manager for ""
	I0917 02:25:59.874746    5221 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 02:25:59.896381    5221 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0917 02:25:59.916935    5221 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 02:25:59.920792    5221 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0917 02:25:59.920804    5221 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0917 02:25:59.920811    5221 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0917 02:25:59.920820    5221 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0917 02:25:59.920828    5221 command_runner.go:130] > Access: 2024-09-17 09:25:20.659884632 +0000
	I0917 02:25:59.920834    5221 command_runner.go:130] > Modify: 2024-09-15 21:28:20.000000000 +0000
	I0917 02:25:59.920839    5221 command_runner.go:130] > Change: 2024-09-17 09:25:19.114884636 +0000
	I0917 02:25:59.920842    5221 command_runner.go:130] >  Birth: -
	I0917 02:25:59.921065    5221 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0917 02:25:59.921074    5221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 02:25:59.935098    5221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 02:26:00.271015    5221 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0917 02:26:00.286389    5221 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0917 02:26:00.395234    5221 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0917 02:26:00.455408    5221 command_runner.go:130] > daemonset.apps/kindnet configured
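
With three nodes detected, minikube picks kindnet and applies its manifest with the guest's pinned kubectl; the unchanged/configured lines above are kubectl's usual server-side apply output. The step boils down to one command, sketched here with the binary and file paths taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyCNI mirrors the kindnet rollout: apply the generated cni.yaml
    // with the pinned kubectl against the in-guest kubeconfig.
    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println(err)
        }
    }
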
	I0917 02:26:00.456885    5221 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 02:26:00.456935    5221 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 02:26:00.456945    5221 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 02:26:00.456991    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:00.456996    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.457002    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.457007    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.460212    5221 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:26:00.460221    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.460226    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.460229    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.460232    5221 round_trippers.go:580]     Audit-Id: cec1c0fd-eae1-4561-bea4-e1b4450f66f7
	I0917 02:26:00.460235    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.460237    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.460239    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.461130    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"857"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89937 chars]
	I0917 02:26:00.464211    5221 system_pods.go:59] 12 kube-system pods found
	I0917 02:26:00.464227    5221 system_pods.go:61] "coredns-7c65d6cfc9-hr8rd" [c990c87f-921e-45ba-845b-499147aaa1f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 02:26:00.464234    5221 system_pods.go:61] "etcd-multinode-232000" [023b8525-6267-41df-ab63-f9c82adf3da1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 02:26:00.464238    5221 system_pods.go:61] "kindnet-7djsb" [4b28da1f-ce8e-43a9-bda0-e44de7b6d582] Running
	I0917 02:26:00.464241    5221 system_pods.go:61] "kindnet-bz9gj" [42665fdd-c209-43ac-8852-3fd0517abce4] Running
	I0917 02:26:00.464244    5221 system_pods.go:61] "kindnet-fgvhm" [f8fe7dd6-85d9-447e-88f1-d98d354a0802] Running
	I0917 02:26:00.464248    5221 system_pods.go:61] "kube-apiserver-multinode-232000" [4bc0fa4f-4ca7-478d-8b7c-b59c24a56faa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 02:26:00.464253    5221 system_pods.go:61] "kube-controller-manager-multinode-232000" [788e2a30-fcea-4f4c-afc3-52d73d046e1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 02:26:00.464256    5221 system_pods.go:61] "kube-proxy-8fb4t" [e73b5d46-804f-4a13-a286-f0194436c3fc] Running
	I0917 02:26:00.464260    5221 system_pods.go:61] "kube-proxy-9s8zh" [8516d216-3857-4702-9656-97c8c91337fc] Running
	I0917 02:26:00.464262    5221 system_pods.go:61] "kube-proxy-xlb2z" [66e8dada-5a23-453e-ba6e-a9146d3467e7] Running
	I0917 02:26:00.464266    5221 system_pods.go:61] "kube-scheduler-multinode-232000" [a38a42a2-e0f9-4c6e-aa99-8dae3f326090] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 02:26:00.464269    5221 system_pods.go:61] "storage-provisioner" [878f83a8-de4f-48b8-98ac-2d34171091ae] Running
	I0917 02:26:00.464273    5221 system_pods.go:74] duration metric: took 7.380959ms to wait for pod list to return data ...
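
The round_trippers lines above are client-go's verbose request logging for a pod list in kube-system, which feeds the "12 kube-system pods found" summary. The equivalent call through the typed clientset, as a sketch with a placeholder kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // Lists kube-system pods the same way the raw GET above does, then
    // prints a summary in the spirit of the system_pods.go lines.
    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
        if err != nil {
            fmt.Println(err)
            return
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println(err)
            return
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }
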
	I0917 02:26:00.464279    5221 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:26:00.464319    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0917 02:26:00.464324    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.464329    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.464333    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.466448    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:00.466458    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.466465    5221 round_trippers.go:580]     Audit-Id: 10a26a59-7740-4dc4-b164-f2eedcd6348d
	I0917 02:26:00.466468    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.466473    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.466477    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.466480    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.466482    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.466610    5221 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"857"},"items":[{"metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"785","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14803 chars]
	I0917 02:26:00.467117    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:00.467129    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:00.467136    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:00.467140    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:00.467143    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:00.467147    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:00.467150    5221 node_conditions.go:105] duration metric: took 2.867049ms to run NodePressure ...
	I0917 02:26:00.467161    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:26:00.568747    5221 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0917 02:26:00.721004    5221 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0917 02:26:00.722034    5221 command_runner.go:130] ! W0917 09:26:00.657631    2181 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:26:00.722054    5221 command_runner.go:130] ! W0917 09:26:00.658264    2181 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 02:26:00.722106    5221 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 02:26:00.722160    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0917 02:26:00.722166    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.722172    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.722176    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.724164    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.724175    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.724180    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.724182    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.724185    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.724187    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.724207    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.724215    5221 round_trippers.go:580]     Audit-Id: 18efc7ff-7678-4205-b456-90f2887b9eab
	I0917 02:26:00.724775    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"etcd-multinode-232000","namespace":"kube-system","uid":"023b8525-6267-41df-ab63-f9c82adf3da1","resourceVersion":"831","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.mirror":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953777Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 31218 chars]
	I0917 02:26:00.725484    5221 kubeadm.go:739] kubelet initialised
	I0917 02:26:00.725493    5221 kubeadm.go:740] duration metric: took 3.377858ms waiting for restarted kubelet to initialise ...
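
The "restarted kubelet" question is answered by the single GET above: list the static pods carrying the tier=control-plane label and treat a non-empty result as "initialised". A client-go sketch of that call (a reading of the log, not kubeadm.go itself):

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // controlPlanePods mirrors the labelSelector=tier%3Dcontrol-plane GET above.
    func controlPlanePods(cs *kubernetes.Clientset) (*corev1.PodList, error) {
    	return cs.CoreV1().Pods("kube-system").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: "tier=control-plane"})
    }
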
	I0917 02:26:00.725500    5221 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:26:00.725530    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:00.725535    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.725541    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.725544    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.727271    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.727277    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.727282    5221 round_trippers.go:580]     Audit-Id: c3d42fe6-d7d2-44ec-ae2b-ee38c04872a2
	I0917 02:26:00.727285    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.727287    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.727289    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.727291    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.727293    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.728152    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89937 chars]
	I0917 02:26:00.730071    5221 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:00.730106    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:00.730111    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.730117    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.730119    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.731250    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.731257    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.731261    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.731265    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.731268    5221 round_trippers.go:580]     Audit-Id: a78352ed-26c0-4e81-a8bf-669761c79dd5
	I0917 02:26:00.731271    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.731274    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.731277    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.731408    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0917 02:26:00.731655    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:00.731662    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.731667    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.731670    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.732820    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.732828    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.732834    5221 round_trippers.go:580]     Audit-Id: 364120fb-93d4-4739-ab35-b6388d7029de
	I0917 02:26:00.732840    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.732845    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.732849    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.732853    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.732857    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.732992    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"785","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5299 chars]
	I0917 02:26:00.733169    5221 pod_ready.go:98] node "multinode-232000" hosting pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:00.733179    5221 pod_ready.go:82] duration metric: took 3.099005ms for pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:00.733185    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000" hosting pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
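
The "(skipping!)" above is a deliberate gate, not a failure: per-pod readiness is only meaningful once the hosting node reports Ready=True, and multinode-232000 is still coming back from the restart. Read off the log, the node-side half of that check amounts to (a sketch, not minikube's source):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    // nodeIsReady mirrors the `has status "Ready":"False"` test in the log:
    // find the NodeReady condition and require it to be True.
    func nodeIsReady(node *corev1.Node) bool {
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
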
	I0917 02:26:00.733190    5221 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:00.733215    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-232000
	I0917 02:26:00.733220    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.733225    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.733230    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.734322    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.734330    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.734334    5221 round_trippers.go:580]     Audit-Id: a274e108-7133-432c-a114-30d2b2440538
	I0917 02:26:00.734339    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.734346    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.734351    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.734355    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.734358    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.734505    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-232000","namespace":"kube-system","uid":"023b8525-6267-41df-ab63-f9c82adf3da1","resourceVersion":"831","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.mirror":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953777Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6887 chars]
	I0917 02:26:00.734739    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:00.734746    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.734752    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.734755    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.735830    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.735839    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.735846    5221 round_trippers.go:580]     Audit-Id: 295e208e-d170-464f-81af-780e49267dd7
	I0917 02:26:00.735851    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.735855    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.735861    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.735868    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.735872    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.736015    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"785","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5299 chars]
	I0917 02:26:00.736177    5221 pod_ready.go:98] node "multinode-232000" hosting pod "etcd-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:00.736185    5221 pod_ready.go:82] duration metric: took 2.991217ms for pod "etcd-multinode-232000" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:00.736191    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000" hosting pod "etcd-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:00.736203    5221 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:00.736229    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-232000
	I0917 02:26:00.736233    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.736238    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.736242    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.737488    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.737494    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.737499    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.737507    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.737511    5221 round_trippers.go:580]     Audit-Id: 38f7c539-904b-4bc2-ab57-0e7c28997026
	I0917 02:26:00.737513    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.737516    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.737518    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.737712    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-232000","namespace":"kube-system","uid":"4bc0fa4f-4ca7-478d-8b7c-b59c24a56faa","resourceVersion":"830","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"67a66a13a29b1cc7d88ed81650cbab1c","kubernetes.io/config.mirror":"67a66a13a29b1cc7d88ed81650cbab1c","kubernetes.io/config.seen":"2024-09-17T09:21:50.527954370Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0917 02:26:00.737930    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:00.737937    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.737942    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.737947    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.739093    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.739101    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.739107    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.739112    5221 round_trippers.go:580]     Audit-Id: ad26ed6c-c42e-43ba-9d3b-e6f1982265f9
	I0917 02:26:00.739116    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.739120    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.739123    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.739126    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.739313    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"785","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5299 chars]
	I0917 02:26:00.739475    5221 pod_ready.go:98] node "multinode-232000" hosting pod "kube-apiserver-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:00.739483    5221 pod_ready.go:82] duration metric: took 3.275465ms for pod "kube-apiserver-multinode-232000" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:00.739488    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000" hosting pod "kube-apiserver-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:00.739493    5221 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:00.739520    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-232000
	I0917 02:26:00.739525    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.739530    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.739534    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.740750    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:00.740757    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.740761    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:00 GMT
	I0917 02:26:00.740765    5221 round_trippers.go:580]     Audit-Id: 01d33272-6141-4eb7-b512-d6c88b5da131
	I0917 02:26:00.740768    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.740771    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.740773    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.740776    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.740970    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-232000","namespace":"kube-system","uid":"788e2a30-fcea-4f4c-afc3-52d73d046e1d","resourceVersion":"827","creationTimestamp":"2024-09-17T09:21:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70ae9bf162cfffffa860fd43666d4b44","kubernetes.io/config.mirror":"70ae9bf162cfffffa860fd43666d4b44","kubernetes.io/config.seen":"2024-09-17T09:21:55.992286729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0917 02:26:00.859144    5221 request.go:632] Waited for 117.91154ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:00.859210    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:00.859218    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:00.859226    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:00.859231    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:00.863641    5221 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:26:00.863654    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:00.863659    5221 round_trippers.go:580]     Audit-Id: 77a25608-6114-48c9-9634-131e5aa8ab60
	I0917 02:26:00.863662    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:00.863665    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:00.863668    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:00.863686    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:00.863689    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:01 GMT
	I0917 02:26:00.863757    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"785","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5299 chars]
	I0917 02:26:00.863948    5221 pod_ready.go:98] node "multinode-232000" hosting pod "kube-controller-manager-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:00.863959    5221 pod_ready.go:82] duration metric: took 124.46061ms for pod "kube-controller-manager-multinode-232000" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:00.863967    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000" hosting pod "kube-controller-manager-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
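
The 117ms and 124ms waits above are client-go's own token-bucket limiter at work; the message says so explicitly ("client-side throttling, not priority and fairness"), distinguishing it from server-side APF. rest.Config ships with modest defaults (QPS 5, Burst 10), which this burst of per-pod GETs quickly exhausts. A sketch of loosening those limits, with illustrative values:

    package sketch

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // clientWithHigherLimits raises the client-side rate limiter so tight
    // polling loops like the one above spend less time self-throttling.
    func clientWithHigherLimits(cfg *rest.Config) (*kubernetes.Clientset, error) {
    	cfg.QPS = 50    // steady-state requests per second (default 5)
    	cfg.Burst = 100 // short-term burst allowance (default 10)
    	return kubernetes.NewForConfig(cfg)
    }
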
	I0917 02:26:00.863973    5221 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8fb4t" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:01.057657    5221 request.go:632] Waited for 193.636685ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fb4t
	I0917 02:26:01.057703    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fb4t
	I0917 02:26:01.057722    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:01.057735    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:01.057741    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:01.060635    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:01.060648    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:01.060655    5221 round_trippers.go:580]     Audit-Id: bd584152-cae4-4f0a-af03-4be48c6f706d
	I0917 02:26:01.060658    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:01.060663    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:01.060667    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:01.060670    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:01.060674    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:01 GMT
	I0917 02:26:01.060923    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8fb4t","generateName":"kube-proxy-","namespace":"kube-system","uid":"e73b5d46-804f-4a13-a286-f0194436c3fc","resourceVersion":"516","creationTimestamp":"2024-09-17T09:22:43Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0917 02:26:01.258483    5221 request.go:632] Waited for 197.196092ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:01.258590    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:01.258600    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:01.258611    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:01.258621    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:01.261558    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:01.261574    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:01.261581    5221 round_trippers.go:580]     Audit-Id: 23ec325f-145e-4144-8462-c939547787a6
	I0917 02:26:01.261586    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:01.261590    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:01.261593    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:01.261616    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:01.261623    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:01 GMT
	I0917 02:26:01.261757    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"b0d6988f-c01e-465b-b2df-6e79ea652296","resourceVersion":"581","creationTimestamp":"2024-09-17T09:22:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_22_44_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3825 chars]
	I0917 02:26:01.261975    5221 pod_ready.go:93] pod "kube-proxy-8fb4t" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:01.261987    5221 pod_ready.go:82] duration metric: took 398.006781ms for pod "kube-proxy-8fb4t" in "kube-system" namespace to be "Ready" ...
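
kube-proxy-8fb4t passes where the earlier pods were skipped because both halves of the check succeed: its node (multinode-232000-m02) is Ready, and the pod's own Ready condition is True. The pod-side half, as a sketch complementing the node gate above:

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    // podIsReady mirrors the `has status "Ready":"True"` line in the log.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
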
	I0917 02:26:01.261996    5221 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9s8zh" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:01.457792    5221 request.go:632] Waited for 195.747165ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9s8zh
	I0917 02:26:01.457871    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9s8zh
	I0917 02:26:01.457883    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:01.457894    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:01.457902    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:01.460228    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:01.460241    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:01.460247    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:01 GMT
	I0917 02:26:01.460252    5221 round_trippers.go:580]     Audit-Id: 36ac8366-ce7c-4dcb-8007-d7a60e2f53c5
	I0917 02:26:01.460255    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:01.460258    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:01.460261    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:01.460265    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:01.460367    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9s8zh","generateName":"kube-proxy-","namespace":"kube-system","uid":"8516d216-3857-4702-9656-97c8c91337fc","resourceVersion":"854","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6394 chars]
	I0917 02:26:01.658875    5221 request.go:632] Waited for 198.175704ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:01.658954    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:01.658960    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:01.658966    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:01.658970    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:01.674149    5221 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0917 02:26:01.674162    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:01.674167    5221 round_trippers.go:580]     Audit-Id: 6de142d5-08f6-4911-ab18-e321199850b4
	I0917 02:26:01.674171    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:01.674174    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:01.674182    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:01.674185    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:01.674189    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:01 GMT
	I0917 02:26:01.679058    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"785","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5299 chars]
	I0917 02:26:01.679257    5221 pod_ready.go:98] node "multinode-232000" hosting pod "kube-proxy-9s8zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:01.679269    5221 pod_ready.go:82] duration metric: took 417.266115ms for pod "kube-proxy-9s8zh" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:01.679276    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000" hosting pod "kube-proxy-9s8zh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:01.679283    5221 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xlb2z" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:01.857364    5221 request.go:632] Waited for 178.043065ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xlb2z
	I0917 02:26:01.857410    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xlb2z
	I0917 02:26:01.857418    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:01.857444    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:01.857450    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:01.859399    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:01.859409    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:01.859414    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:01.859417    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:02 GMT
	I0917 02:26:01.859420    5221 round_trippers.go:580]     Audit-Id: 7ab85c97-7330-473f-9b12-88f73918958c
	I0917 02:26:01.859422    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:01.859426    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:01.859429    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:01.859755    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xlb2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"66e8dada-5a23-453e-ba6e-a9146d3467e7","resourceVersion":"742","creationTimestamp":"2024-09-17T09:23:37Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:23:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0917 02:26:02.057154    5221 request.go:632] Waited for 197.135315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m03
	I0917 02:26:02.057211    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m03
	I0917 02:26:02.057217    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:02.057223    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:02.057227    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:02.062226    5221 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:26:02.062239    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:02.062244    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:02 GMT
	I0917 02:26:02.062247    5221 round_trippers.go:580]     Audit-Id: ed528017-c96f-4b94-af17-c6026481838a
	I0917 02:26:02.062250    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:02.062258    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:02.062262    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:02.062264    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:02.062682    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m03","uid":"ca6d8a0b-78e8-401d-8fd0-21af7b79983d","resourceVersion":"768","creationTimestamp":"2024-09-17T09:24:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_24_31_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:24:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3642 chars]
	I0917 02:26:02.062850    5221 pod_ready.go:93] pod "kube-proxy-xlb2z" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:02.062860    5221 pod_ready.go:82] duration metric: took 383.57032ms for pod "kube-proxy-xlb2z" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:02.062867    5221 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:02.257084    5221 request.go:632] Waited for 194.171971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-232000
	I0917 02:26:02.257181    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-232000
	I0917 02:26:02.257193    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:02.257205    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:02.257213    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:02.259698    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:02.259711    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:02.259718    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:02 GMT
	I0917 02:26:02.259722    5221 round_trippers.go:580]     Audit-Id: e675a878-7f95-42c4-8341-65277fb467ce
	I0917 02:26:02.259726    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:02.259730    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:02.259733    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:02.259737    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:02.260062    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-232000","namespace":"kube-system","uid":"a38a42a2-e0f9-4c6e-aa99-8dae3f326090","resourceVersion":"828","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6ab4348c7fba41d6fa49c901ef5e8acf","kubernetes.io/config.mirror":"6ab4348c7fba41d6fa49c901ef5e8acf","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0917 02:26:02.457926    5221 request.go:632] Waited for 197.630012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:02.458000    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:02.458007    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:02.458025    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:02.458030    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:02.460577    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:02.460587    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:02.460593    5221 round_trippers.go:580]     Audit-Id: e88ee0d3-2228-484e-b323-73d11157d0ad
	I0917 02:26:02.460624    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:02.460630    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:02.460632    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:02.460634    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:02.460637    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:02 GMT
	I0917 02:26:02.460715    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:02.460912    5221 pod_ready.go:98] node "multinode-232000" hosting pod "kube-scheduler-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:02.460923    5221 pod_ready.go:82] duration metric: took 398.05006ms for pod "kube-scheduler-multinode-232000" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:02.460930    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000" hosting pod "kube-scheduler-multinode-232000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000" has status "Ready":"False"
	I0917 02:26:02.460937    5221 pod_ready.go:39] duration metric: took 1.735423244s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:26:02.460951    5221 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 02:26:02.472005    5221 command_runner.go:130] > -16
	I0917 02:26:02.472030    5221 ops.go:34] apiserver oom_adj: -16
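
The -16 confirms the restarted apiserver came up with a protective OOM adjustment, so the kernel's OOM killer will prefer other processes. A sketch of the same procfs read in Go (pid discovery via pgrep, as the log does it, is left out):

    package sketch

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // oomAdj reads /proc/<pid>/oom_adj, the legacy OOM knob checked above;
    // modern kernels also expose oom_score_adj, which supersedes it.
    func oomAdj(pid int) (string, error) {
    	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(b)), nil
    }
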
	I0917 02:26:02.472035    5221 kubeadm.go:597] duration metric: took 9.172389361s to restartPrimaryControlPlane
	I0917 02:26:02.472040    5221 kubeadm.go:394] duration metric: took 9.193145452s to StartCluster
	I0917 02:26:02.472051    5221 settings.go:142] acquiring lock: {Name:mk5de70c670a23cf49fdbd19ddb5c1d2a9de9a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:26:02.472140    5221 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:26:02.472472    5221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1025/kubeconfig: {Name:mk89516924578dcccd5bbd950d94f0ba54499729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
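
The kubeconfig write above is serialized behind a named lock carrying the Delay:500ms / Timeout:1m0s parameters shown. Minikube's actual lock implementation isn't reproduced here; a rough stand-in using an O_EXCL lock file with the same retry parameters:

    package sketch

    import (
    	"errors"
    	"os"
    	"time"
    )

    // withFileLock retries lock acquisition every 500ms (the Delay in the
    // log) for up to one minute (the Timeout), then runs fn under the lock.
    func withFileLock(path string, fn func() error) error {
    	lock := path + ".lock"
    	deadline := time.Now().Add(1 * time.Minute)
    	for {
    		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			defer os.Remove(lock)
    			return fn()
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out acquiring " + lock)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
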
	I0917 02:26:02.472755    5221 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:26:02.472772    5221 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:26:02.472880    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:02.530468    5221 out.go:177] * Verifying Kubernetes components...
	I0917 02:26:02.572271    5221 out.go:177] * Enabled addons: 
	I0917 02:26:02.593371    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:02.613940    5221 addons.go:510] duration metric: took 141.175129ms for enable addons: enabled=[]
	I0917 02:26:02.733145    5221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:26:02.746052    5221 node_ready.go:35] waiting up to 6m0s for node "multinode-232000" to be "Ready" ...
	I0917 02:26:02.746107    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:02.746112    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:02.746118    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:02.746121    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:02.747828    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:02.747837    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:02.747842    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:02.747845    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:02.747847    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:02.747850    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:02.747855    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:02 GMT
	I0917 02:26:02.747858    5221 round_trippers.go:580]     Audit-Id: d5496d68-1a64-404a-8d6c-9b7cc23eab7d
	I0917 02:26:02.748129    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:03.246400    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:03.246425    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:03.246436    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:03.246443    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:03.248831    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:03.248843    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:03.248849    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:03.248853    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:03 GMT
	I0917 02:26:03.248872    5221 round_trippers.go:580]     Audit-Id: d5465bc4-df8d-4c46-ae7c-3c5669cf489d
	I0917 02:26:03.248883    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:03.248892    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:03.248899    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:03.249260    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:03.746905    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:03.746926    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:03.746937    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:03.746943    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:03.749584    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:03.749599    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:03.749606    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:03.749610    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:03.749614    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:03.749618    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:03 GMT
	I0917 02:26:03.749623    5221 round_trippers.go:580]     Audit-Id: 0de3ccd8-6107-4820-bf38-1d95edc7f688
	I0917 02:26:03.749629    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:03.749737    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:04.247213    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:04.247255    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:04.247266    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:04.247271    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:04.249600    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:04.249612    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:04.249619    5221 round_trippers.go:580]     Audit-Id: 77757cd5-1fdb-49a4-af4b-f47aecf7626b
	I0917 02:26:04.249622    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:04.249625    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:04.249629    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:04.249631    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:04.249634    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:04 GMT
	I0917 02:26:04.249777    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:04.746521    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:04.746548    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:04.746559    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:04.746565    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:04.749018    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:04.749033    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:04.749040    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:04.749047    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:04 GMT
	I0917 02:26:04.749051    5221 round_trippers.go:580]     Audit-Id: 6310c865-04f8-461f-b9f3-df4feda0be92
	I0917 02:26:04.749057    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:04.749061    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:04.749064    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:04.749233    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:04.749483    5221 node_ready.go:53] node "multinode-232000" has status "Ready":"False"
	I0917 02:26:05.247645    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:05.247665    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:05.247676    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:05.247686    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:05.250631    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:05.250645    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:05.250652    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:05.250658    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:05 GMT
	I0917 02:26:05.250665    5221 round_trippers.go:580]     Audit-Id: bf84e072-c3a4-434c-8a99-ca902a8cd4fa
	I0917 02:26:05.250671    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:05.250675    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:05.250681    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:05.251112    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:05.746715    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:05.746739    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:05.746750    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:05.746758    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:05.749253    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:05.749274    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:05.749285    5221 round_trippers.go:580]     Audit-Id: 031b803f-ecc3-4ecc-a94a-2f1be6a17281
	I0917 02:26:05.749295    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:05.749301    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:05.749306    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:05.749314    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:05.749318    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:05 GMT
	I0917 02:26:05.749510    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:06.248253    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:06.248270    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:06.248278    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:06.248282    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:06.249998    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:06.250006    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:06.250011    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:06.250014    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:06 GMT
	I0917 02:26:06.250016    5221 round_trippers.go:580]     Audit-Id: 5162bda8-72e3-4622-af16-9d414585fc88
	I0917 02:26:06.250019    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:06.250022    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:06.250024    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:06.250155    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:06.748270    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:06.748295    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:06.748335    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:06.748342    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:06.751307    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:06.751323    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:06.751331    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:06.751334    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:06.751339    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:06 GMT
	I0917 02:26:06.751344    5221 round_trippers.go:580]     Audit-Id: 1e3a75d5-822f-4917-9c8c-59e939389560
	I0917 02:26:06.751348    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:06.751351    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:06.751441    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:06.751700    5221 node_ready.go:53] node "multinode-232000" has status "Ready":"False"
	I0917 02:26:07.247009    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:07.247024    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:07.247032    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:07.247035    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:07.248816    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:07.248829    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:07.248836    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:07.248840    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:07.248843    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:07 GMT
	I0917 02:26:07.248846    5221 round_trippers.go:580]     Audit-Id: 11fd5706-ef83-43d5-bad1-04e231326bf5
	I0917 02:26:07.248849    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:07.248853    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:07.249074    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:07.746752    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:07.746777    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:07.746789    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:07.746794    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:07.749614    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:07.749631    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:07.749638    5221 round_trippers.go:580]     Audit-Id: 358e443a-a491-4fa7-a31f-e9a538bc2208
	I0917 02:26:07.749644    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:07.749648    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:07.749654    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:07.749659    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:07.749663    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:07 GMT
	I0917 02:26:07.749878    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:08.247780    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:08.247803    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:08.247812    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:08.247820    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:08.250152    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:08.250165    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:08.250171    5221 round_trippers.go:580]     Audit-Id: 6f74250f-c5b6-4af8-89db-d7aecbd6a52f
	I0917 02:26:08.250176    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:08.250179    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:08.250183    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:08.250201    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:08.250206    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:08 GMT
	I0917 02:26:08.250582    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:08.748333    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:08.748363    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:08.748375    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:08.748380    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:08.751003    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:08.751019    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:08.751026    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:08.751030    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:08 GMT
	I0917 02:26:08.751033    5221 round_trippers.go:580]     Audit-Id: 942c4a5b-058d-4147-b553-8bfe7917e2d1
	I0917 02:26:08.751037    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:08.751042    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:08.751045    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:08.751315    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:09.247719    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:09.247742    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:09.247755    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:09.247761    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:09.250302    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:09.250352    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:09.250362    5221 round_trippers.go:580]     Audit-Id: d9fb6f04-f825-4b7a-9c00-4563ab889066
	I0917 02:26:09.250366    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:09.250369    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:09.250373    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:09.250378    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:09.250384    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:09 GMT
	I0917 02:26:09.250531    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:09.250783    5221 node_ready.go:53] node "multinode-232000" has status "Ready":"False"
	I0917 02:26:09.748371    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:09.748395    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:09.748409    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:09.748418    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:09.751179    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:09.751193    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:09.751200    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:09.751203    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:09 GMT
	I0917 02:26:09.751208    5221 round_trippers.go:580]     Audit-Id: 02426122-cb5d-4703-8836-c7e9379e2552
	I0917 02:26:09.751212    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:09.751237    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:09.751243    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:09.751309    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:10.247062    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:10.247089    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:10.247102    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:10.247107    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:10.249677    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:10.249690    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:10.249695    5221 round_trippers.go:580]     Audit-Id: d774fdc1-907c-4c0f-93f2-4826af2c3275
	I0917 02:26:10.249700    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:10.249704    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:10.249707    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:10.249712    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:10.249715    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:10 GMT
	I0917 02:26:10.249793    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:10.747093    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:10.747117    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:10.747129    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:10.747134    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:10.749952    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:10.749966    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:10.749974    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:10 GMT
	I0917 02:26:10.749979    5221 round_trippers.go:580]     Audit-Id: 4ed5f44c-a575-41f3-ad51-29a6b033237a
	I0917 02:26:10.749982    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:10.749984    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:10.749987    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:10.749991    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:10.750065    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:11.247210    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:11.247229    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:11.247239    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:11.247243    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:11.251635    5221 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:26:11.251650    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:11.251655    5221 round_trippers.go:580]     Audit-Id: c47fce14-9a86-4b1c-bd1c-c537454756cf
	I0917 02:26:11.251658    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:11.251660    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:11.251662    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:11.251665    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:11.251667    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:11 GMT
	I0917 02:26:11.251735    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:11.251930    5221 node_ready.go:53] node "multinode-232000" has status "Ready":"False"
	I0917 02:26:11.746646    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:11.746671    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:11.746683    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:11.746690    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:11.749441    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:11.749453    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:11.749458    5221 round_trippers.go:580]     Audit-Id: 00ce8c37-8d96-44ed-9750-000be77865e3
	I0917 02:26:11.749461    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:11.749464    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:11.749467    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:11.749470    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:11.749472    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:11 GMT
	I0917 02:26:11.749559    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:12.247113    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:12.247126    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:12.247133    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:12.247137    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:12.250221    5221 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:26:12.250233    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:12.250253    5221 round_trippers.go:580]     Audit-Id: 1ed3c433-146a-4820-afa8-cf9eb4213a58
	I0917 02:26:12.250258    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:12.250261    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:12.250264    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:12.250268    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:12.250275    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:12 GMT
	I0917 02:26:12.250407    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"875","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5515 chars]
	I0917 02:26:12.747129    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:12.747153    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:12.747165    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:12.747171    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:12.749967    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:12.749994    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:12.750002    5221 round_trippers.go:580]     Audit-Id: 877c75f0-fef6-492f-a4e9-0149d136f58b
	I0917 02:26:12.750008    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:12.750011    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:12.750014    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:12.750017    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:12.750020    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:12 GMT
	I0917 02:26:12.750172    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:12.750420    5221 node_ready.go:49] node "multinode-232000" has status "Ready":"True"
	I0917 02:26:12.750436    5221 node_ready.go:38] duration metric: took 10.00431778s for node "multinode-232000" to be "Ready" ...
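The ten-second gap recorded above is the product of a plain poll loop: roughly every 500ms the client GETs the Node object and inspects its NodeReady condition until it flips to True. A minimal Go sketch of that pattern using client-go, assuming a kubernetes.Interface clientset (illustrative only, not minikube's literal node_ready.go code; the helper name, interval, and timeout handling are assumptions):

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the API server until the named node reports the
// NodeReady condition as True, or the timeout elapses. The 500ms cadence
// mirrors the request spacing visible in the log above; it is an assumed
// value, not taken from minikube's source.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil // "Ready":"True", as observed at 02:26:12 above
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}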
	I0917 02:26:12.750444    5221 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:26:12.750490    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:12.750497    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:12.750504    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:12.750509    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:12.753310    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:12.753331    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:12.753344    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:12.753350    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:12.753357    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:12 GMT
	I0917 02:26:12.753360    5221 round_trippers.go:580]     Audit-Id: 9e476eb0-3edc-440e-8a01-9f661f7aa4f5
	I0917 02:26:12.753367    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:12.753371    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:12.754053    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"912"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89225 chars]
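The PodList fetched above is then narrowed to the system-critical labels enumerated at 02:26:12.750444. A hedged sketch of that filter (the helper name and map layout are assumptions; the label/value pairs come straight from the logged list):

import corev1 "k8s.io/api/core/v1"

// isSystemCritical is a hypothetical helper: it reports whether a pod
// carries one of the label/value pairs the waiter logged as
// system-critical ([k8s-app=kube-dns component=etcd ...] above).
func isSystemCritical(pod *corev1.Pod) bool {
	critical := map[string][]string{
		"k8s-app":   {"kube-dns", "kube-proxy"},
		"component": {"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"},
	}
	for key, wanted := range critical {
		got, ok := pod.Labels[key]
		if !ok {
			continue
		}
		for _, v := range wanted {
			if got == v {
				return true
			}
		}
	}
	return false
}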
	I0917 02:26:12.755941    5221 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:12.755980    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:12.755985    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:12.755991    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:12.755995    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:12.757082    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:12.757091    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:12.757099    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:12 GMT
	I0917 02:26:12.757104    5221 round_trippers.go:580]     Audit-Id: 51e0aced-52b1-45a1-8302-f1ebe70f0df4
	I0917 02:26:12.757107    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:12.757110    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:12.757114    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:12.757117    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:12.757234    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0917 02:26:12.757489    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:12.757496    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:12.757501    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:12.757504    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:12.758440    5221 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:26:12.758449    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:12.758454    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:12 GMT
	I0917 02:26:12.758457    5221 round_trippers.go:580]     Audit-Id: 8a7645d5-3390-4ab1-9a1d-0fafe92e8c98
	I0917 02:26:12.758460    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:12.758463    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:12.758466    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:12.758468    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:12.758628    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:13.257052    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:13.257080    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:13.257092    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:13.257097    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:13.259589    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:13.259602    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:13.259609    5221 round_trippers.go:580]     Audit-Id: 9cf005fe-53ba-4797-860e-75eaa0353a49
	I0917 02:26:13.259613    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:13.259617    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:13.259621    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:13.259624    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:13.259627    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:13 GMT
	I0917 02:26:13.259790    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0917 02:26:13.260168    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:13.260178    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:13.260185    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:13.260190    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:13.261811    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:13.261819    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:13.261824    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:13.261827    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:13.261831    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:13.261834    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:13 GMT
	I0917 02:26:13.261836    5221 round_trippers.go:580]     Audit-Id: 6765ccb0-a0e2-4d7e-9d32-1b342f197114
	I0917 02:26:13.261840    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:13.261894    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:13.756286    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:13.756338    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:13.756351    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:13.756357    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:13.758956    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:13.758971    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:13.758980    5221 round_trippers.go:580]     Audit-Id: 52871bc1-08f7-4ce3-8e68-c20630b80256
	I0917 02:26:13.758984    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:13.758988    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:13.758994    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:13.758997    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:13.759000    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:13 GMT
	I0917 02:26:13.759205    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0917 02:26:13.759582    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:13.759592    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:13.759600    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:13.759604    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:13.760956    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:13.760963    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:13.760968    5221 round_trippers.go:580]     Audit-Id: 92042065-ee16-4db0-bef2-6b45ad14a8c6
	I0917 02:26:13.760971    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:13.760974    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:13.760978    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:13.760984    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:13.760987    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:13 GMT
	I0917 02:26:13.761154    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:14.256161    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:14.256204    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:14.256211    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:14.256216    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:14.257917    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:14.257932    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:14.257940    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:14.257945    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:14.257951    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:14.257955    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:14 GMT
	I0917 02:26:14.257962    5221 round_trippers.go:580]     Audit-Id: f289b32a-0f9c-4903-b0d5-985a11f8c99d
	I0917 02:26:14.257976    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:14.258044    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0917 02:26:14.258324    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:14.258332    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:14.258337    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:14.258340    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:14.259811    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:14.259821    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:14.259829    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:14.259833    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:14.259836    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:14.259839    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:14 GMT
	I0917 02:26:14.259844    5221 round_trippers.go:580]     Audit-Id: 8b5bd704-6a01-49ac-8aec-021470923829
	I0917 02:26:14.259846    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:14.260106    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:14.756244    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:14.756260    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:14.756267    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:14.756270    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:14.758565    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:14.758576    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:14.758581    5221 round_trippers.go:580]     Audit-Id: 9f168eff-c34d-4067-9d03-893685573ae1
	I0917 02:26:14.758586    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:14.758589    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:14.758592    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:14.758594    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:14.758597    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:14 GMT
	I0917 02:26:14.758648    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"836","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0917 02:26:14.758929    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:14.758936    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:14.758941    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:14.758945    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:14.760978    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:14.760986    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:14.760992    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:14 GMT
	I0917 02:26:14.760995    5221 round_trippers.go:580]     Audit-Id: 0bd0bc85-4f39-4866-9d95-ea6740130808
	I0917 02:26:14.760999    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:14.761004    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:14.761008    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:14.761012    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:14.761299    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:14.761478    5221 pod_ready.go:103] pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace has status "Ready":"False"
	I0917 02:26:15.256937    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:15.256978    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.257003    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.257007    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.258811    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.258821    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.258831    5221 round_trippers.go:580]     Audit-Id: 30550394-f485-48b0-8ebf-2f4638f3cea2
	I0917 02:26:15.258836    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.258841    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.258844    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.258847    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.258849    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.259083    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"926","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7039 chars]
	I0917 02:26:15.259362    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:15.259369    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.259375    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.259380    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.260500    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.260510    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.260515    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.260518    5221 round_trippers.go:580]     Audit-Id: 65b980fb-bf64-4c00-8aef-e0807c6a7f9a
	I0917 02:26:15.260523    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.260530    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.260534    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.260537    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.260857    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:15.261030    5221 pod_ready.go:93] pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:15.261041    5221 pod_ready.go:82] duration metric: took 2.505078533s for pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace to be "Ready" ...
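
The coredns wait that just completed illustrates the readiness-poll pattern running throughout this log: re-fetch the pod roughly every 500ms and inspect its Ready condition until it reports True (here it flipped after 2.5s). Below is a minimal sketch of that pattern, assuming client-go; it is an illustration of the logged behavior, not minikube's actual pod_ready.go implementation.

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod's Ready condition is True, the check
// behind the "Ready":"False" / "Ready":"True" verdicts in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// WaitPodReady re-fetches the pod on a fixed interval until it is Ready or
// the context expires (the log polls about every 500ms under a 6m0s budget).
func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}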
	I0917 02:26:15.261047    5221 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.261077    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-232000
	I0917 02:26:15.261081    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.261087    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.261091    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.262137    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.262145    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.262150    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.262154    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.262157    5221 round_trippers.go:580]     Audit-Id: 0ae9c7df-0193-4496-a3d2-6560286b49de
	I0917 02:26:15.262160    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.262165    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.262170    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.262380    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-232000","namespace":"kube-system","uid":"023b8525-6267-41df-ab63-f9c82adf3da1","resourceVersion":"895","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.mirror":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953777Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6663 chars]
	I0917 02:26:15.262637    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:15.262643    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.262648    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.262650    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.263744    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.263751    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.263756    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.263774    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.263801    5221 round_trippers.go:580]     Audit-Id: 3927b9dc-726b-4707-ab34-f0009b4d0af8
	I0917 02:26:15.263818    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.263824    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.263827    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.263933    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:15.264109    5221 pod_ready.go:93] pod "etcd-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:15.264117    5221 pod_ready.go:82] duration metric: took 3.06514ms for pod "etcd-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.264127    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.264161    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-232000
	I0917 02:26:15.264169    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.264175    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.264179    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.265288    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.265294    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.265298    5221 round_trippers.go:580]     Audit-Id: ec52314f-26b5-4c04-bdfe-3b0687b140f0
	I0917 02:26:15.265302    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.265304    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.265306    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.265315    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.265319    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.265747    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-232000","namespace":"kube-system","uid":"4bc0fa4f-4ca7-478d-8b7c-b59c24a56faa","resourceVersion":"899","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"67a66a13a29b1cc7d88ed81650cbab1c","kubernetes.io/config.mirror":"67a66a13a29b1cc7d88ed81650cbab1c","kubernetes.io/config.seen":"2024-09-17T09:21:50.527954370Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0917 02:26:15.265971    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:15.265978    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.265983    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.265987    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.267067    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.267075    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.267080    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.267084    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.267088    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.267092    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.267097    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.267100    5221 round_trippers.go:580]     Audit-Id: 4d66ba77-7146-4f94-aa26-743d95b6c06e
	I0917 02:26:15.267327    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:15.267493    5221 pod_ready.go:93] pod "kube-apiserver-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:15.267501    5221 pod_ready.go:82] duration metric: took 3.369149ms for pod "kube-apiserver-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.267507    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.267534    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-232000
	I0917 02:26:15.267538    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.267544    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.267549    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.268681    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.268695    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.268700    5221 round_trippers.go:580]     Audit-Id: fea38617-f182-401b-84f4-164f6524b857
	I0917 02:26:15.268703    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.268706    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.268709    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.268712    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.268715    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.268992    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-232000","namespace":"kube-system","uid":"788e2a30-fcea-4f4c-afc3-52d73d046e1d","resourceVersion":"914","creationTimestamp":"2024-09-17T09:21:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70ae9bf162cfffffa860fd43666d4b44","kubernetes.io/config.mirror":"70ae9bf162cfffffa860fd43666d4b44","kubernetes.io/config.seen":"2024-09-17T09:21:55.992286729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0917 02:26:15.269224    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:15.269232    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.269238    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.269243    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.270433    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.270441    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.270446    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.270449    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.270452    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.270455    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.270458    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.270466    5221 round_trippers.go:580]     Audit-Id: 59387e12-6058-46f1-855d-444750a41c7a
	I0917 02:26:15.271323    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:15.271536    5221 pod_ready.go:93] pod "kube-controller-manager-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:15.271544    5221 pod_ready.go:82] duration metric: took 4.032939ms for pod "kube-controller-manager-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.271557    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8fb4t" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.271591    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fb4t
	I0917 02:26:15.271596    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.271602    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.271605    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.272726    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.272734    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.272741    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.272745    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.272749    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.272760    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.272763    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.272765    5221 round_trippers.go:580]     Audit-Id: fc25355a-be45-4ca4-951c-7d819f14f6a4
	I0917 02:26:15.273037    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8fb4t","generateName":"kube-proxy-","namespace":"kube-system","uid":"e73b5d46-804f-4a13-a286-f0194436c3fc","resourceVersion":"516","creationTimestamp":"2024-09-17T09:22:43Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0917 02:26:15.273266    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:15.273273    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.273279    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.273282    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.274345    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.274352    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.274356    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.274360    5221 round_trippers.go:580]     Audit-Id: 2eacf506-e644-4690-a2bd-26023f8af311
	I0917 02:26:15.274364    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.274366    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.274369    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.274371    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.274601    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"b0d6988f-c01e-465b-b2df-6e79ea652296","resourceVersion":"581","creationTimestamp":"2024-09-17T09:22:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_22_44_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3825 chars]
	I0917 02:26:15.274749    5221 pod_ready.go:93] pod "kube-proxy-8fb4t" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:15.274756    5221 pod_ready.go:82] duration metric: took 3.194435ms for pod "kube-proxy-8fb4t" in "kube-system" namespace to be "Ready" ...
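
Each kube-proxy check above pairs the pod GET with a GET of the node the pod is scheduled on (multinode-232000-m02 for kube-proxy-8fb4t), since a DaemonSet pod's readiness is only meaningful relative to its node. A sketch of that pod-to-node resolution, again assuming client-go; the function name is illustrative:

package podwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// NodeForPod issues the same two GETs seen in the log: first the pod, then
// the node named in its spec, so the caller can correlate pod and node state.
func NodeForPod(ctx context.Context, cs kubernetes.Interface, ns, pod string) (*corev1.Node, error) {
	p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	return cs.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
}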
	I0917 02:26:15.274762    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9s8zh" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.457062    5221 request.go:632] Waited for 182.2569ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9s8zh
	I0917 02:26:15.457120    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9s8zh
	I0917 02:26:15.457129    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.457135    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.457139    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.459245    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:15.459258    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.459263    5221 round_trippers.go:580]     Audit-Id: ec6babfe-e5dc-4651-aa2b-8f5967f72bb9
	I0917 02:26:15.459266    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.459269    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.459272    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.459303    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.459308    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.459630    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9s8zh","generateName":"kube-proxy-","namespace":"kube-system","uid":"8516d216-3857-4702-9656-97c8c91337fc","resourceVersion":"890","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6394 chars]
	I0917 02:26:15.659038    5221 request.go:632] Waited for 199.111398ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:15.659171    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:15.659181    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.659192    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.659201    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.662024    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:15.662041    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.662049    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.662053    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.662058    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.662062    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.662066    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:15 GMT
	I0917 02:26:15.662070    5221 round_trippers.go:580]     Audit-Id: 704458b4-8175-4324-abed-2c9fda237785
	I0917 02:26:15.662140    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:15.662396    5221 pod_ready.go:93] pod "kube-proxy-9s8zh" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:15.662410    5221 pod_ready.go:82] duration metric: took 387.641161ms for pod "kube-proxy-9s8zh" in "kube-system" namespace to be "Ready" ...
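
The "Waited for ... due to client-side throttling, not priority and fairness" lines interleaved here come from client-go's local token-bucket rate limiter, not from the API server's priority-and-fairness machinery: once the client outruns its configured QPS, the next request blocks locally and logs that message. A sketch of where those knobs live; QPS 5 / Burst 10 are client-go's usual defaults, not necessarily what minikube configures:

package podwait

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// NewThrottledClient builds a clientset whose request rate is capped locally.
// When the token bucket is empty, client-go blocks the request and emits the
// "client-side throttling" message seen in this log.
func NewThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // steady-state requests per second (client-go default)
	cfg.Burst = 10 // short burst allowance above QPS (client-go default)
	return kubernetes.NewForConfig(cfg)
}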
	I0917 02:26:15.662418    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xlb2z" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:15.858007    5221 request.go:632] Waited for 195.547162ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xlb2z
	I0917 02:26:15.858063    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xlb2z
	I0917 02:26:15.858069    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:15.858078    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:15.858082    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:15.859988    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:15.859998    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:15.860006    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:15.860013    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:15.860022    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:16 GMT
	I0917 02:26:15.860027    5221 round_trippers.go:580]     Audit-Id: 782f28e5-b7a4-41cc-933f-3db4b4f7cb50
	I0917 02:26:15.860031    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:15.860034    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:15.860278    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xlb2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"66e8dada-5a23-453e-ba6e-a9146d3467e7","resourceVersion":"742","creationTimestamp":"2024-09-17T09:23:37Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:23:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0917 02:26:16.057101    5221 request.go:632] Waited for 196.48733ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m03
	I0917 02:26:16.057165    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m03
	I0917 02:26:16.057172    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:16.057178    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:16.057182    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:16.058828    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:16.058839    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:16.058845    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:16.058847    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:16.058850    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:16.058853    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:16.058856    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:16 GMT
	I0917 02:26:16.058859    5221 round_trippers.go:580]     Audit-Id: d6531923-6836-4eca-a29c-5f6fab3b1917
	I0917 02:26:16.059029    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m03","uid":"ca6d8a0b-78e8-401d-8fd0-21af7b79983d","resourceVersion":"768","creationTimestamp":"2024-09-17T09:24:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_24_31_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:24:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3642 chars]
	I0917 02:26:16.059200    5221 pod_ready.go:93] pod "kube-proxy-xlb2z" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:16.059209    5221 pod_ready.go:82] duration metric: took 396.78299ms for pod "kube-proxy-xlb2z" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:16.059216    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:16.257918    5221 request.go:632] Waited for 198.659871ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-232000
	I0917 02:26:16.257982    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-232000
	I0917 02:26:16.257992    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:16.258001    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:16.258008    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:16.260492    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:16.260506    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:16.260514    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:16.260519    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:16.260522    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:16.260525    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:16 GMT
	I0917 02:26:16.260529    5221 round_trippers.go:580]     Audit-Id: 1cae5bad-3ef4-4f3e-a912-fe3e3e367819
	I0917 02:26:16.260532    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:16.260613    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-232000","namespace":"kube-system","uid":"a38a42a2-e0f9-4c6e-aa99-8dae3f326090","resourceVersion":"910","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6ab4348c7fba41d6fa49c901ef5e8acf","kubernetes.io/config.mirror":"6ab4348c7fba41d6fa49c901ef5e8acf","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0917 02:26:16.458572    5221 request.go:632] Waited for 197.660697ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:16.458695    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:16.458708    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:16.458719    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:16.458728    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:16.461469    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:16.461484    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:16.461491    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:16.461497    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:16 GMT
	I0917 02:26:16.461521    5221 round_trippers.go:580]     Audit-Id: b6077a0b-d15d-401b-bcd3-5590a868f232
	I0917 02:26:16.461538    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:16.461545    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:16.461551    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:16.461661    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"912","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0917 02:26:16.461946    5221 pod_ready.go:93] pod "kube-scheduler-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:16.461957    5221 pod_ready.go:82] duration metric: took 402.733744ms for pod "kube-scheduler-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:16.461966    5221 pod_ready.go:39] duration metric: took 3.711495893s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
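
The summary line above names the label selectors the readiness sweep covered (k8s-app=kube-dns, component=etcd, and so on). A sketch of resolving one such selector to concrete pods, assuming client-go; the selector string is taken from the log, the function name is hypothetical:

package podwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PodsForSelector lists the kube-system pods matching one of the selectors
// named in the summary, e.g. "k8s-app=kube-dns".
func PodsForSelector(ctx context.Context, cs kubernetes.Interface, selector string) ([]corev1.Pod, error) {
	list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
		LabelSelector: selector,
	})
	if err != nil {
		return nil, err
	}
	return list.Items, nil
}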
	I0917 02:26:16.461980    5221 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:26:16.462054    5221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:26:16.475377    5221 command_runner.go:130] > 1651
	I0917 02:26:16.475657    5221 api_server.go:72] duration metric: took 14.002823163s to wait for apiserver process to appear ...
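
The process check above runs sudo pgrep -xnf kube-apiserver.*minikube.* inside the VM over SSH (via ssh_runner) and takes the returned PID (1651 here) as evidence the apiserver process exists. A local sketch of the same check with os/exec; minikube actually executes it remotely:

package podwait

import (
	"fmt"
	"os/exec"
	"strings"
)

// APIServerPID mirrors the logged command: -x exact match, -n newest process,
// -f match the pattern against the full command line.
func APIServerPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver process not found: %w", err)
	}
	return strings.TrimSpace(string(out)), nil // e.g. "1651" as logged above
}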
	I0917 02:26:16.475666    5221 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:26:16.475676    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:26:16.479861    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0917 02:26:16.479903    5221 round_trippers.go:463] GET https://192.169.0.14:8443/version
	I0917 02:26:16.479909    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:16.479914    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:16.479919    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:16.480431    5221 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:26:16.480440    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:16.480446    5221 round_trippers.go:580]     Content-Length: 263
	I0917 02:26:16.480450    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:16 GMT
	I0917 02:26:16.480455    5221 round_trippers.go:580]     Audit-Id: 610915a8-772d-405f-9fa4-0d73b790f14d
	I0917 02:26:16.480458    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:16.480461    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:16.480464    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:16.480467    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:16.480482    5221 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0917 02:26:16.480503    5221 api_server.go:141] control plane version: v1.31.1
	I0917 02:26:16.480511    5221 api_server.go:131] duration metric: took 4.840817ms to wait for apiserver health ...
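	The apiserver health gate above is an HTTPS GET against /healthz that expects the literal body "ok", followed by a GET of /version to record the control-plane version. A minimal sketch of that probe, assuming the cluster CA is available at a hypothetical path (minikube itself authenticates with client certificates, and depending on RBAC an anonymous /healthz request may be rejected):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        caPEM, err := os.ReadFile("/path/to/ca.crt") // hypothetical path to the cluster CA
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        resp, err := client.Get("https://192.169.0.14:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver answers 200 "ok"
    }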
	I0917 02:26:16.480515    5221 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 02:26:16.657815    5221 request.go:632] Waited for 177.21952ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:16.657870    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:16.657880    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:16.657893    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:16.657905    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:16.661856    5221 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:26:16.661877    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:16.661885    5221 round_trippers.go:580]     Audit-Id: d4a82d5e-3b36-420f-839b-4141c8b30993
	I0917 02:26:16.661890    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:16.661895    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:16.661898    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:16.661902    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:16.661905    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:16 GMT
	I0917 02:26:16.663033    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"933"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"926","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88910 chars]
	I0917 02:26:16.665172    5221 system_pods.go:59] 12 kube-system pods found
	I0917 02:26:16.665196    5221 system_pods.go:61] "coredns-7c65d6cfc9-hr8rd" [c990c87f-921e-45ba-845b-499147aaa1f9] Running
	I0917 02:26:16.665219    5221 system_pods.go:61] "etcd-multinode-232000" [023b8525-6267-41df-ab63-f9c82adf3da1] Running
	I0917 02:26:16.665223    5221 system_pods.go:61] "kindnet-7djsb" [4b28da1f-ce8e-43a9-bda0-e44de7b6d582] Running
	I0917 02:26:16.665226    5221 system_pods.go:61] "kindnet-bz9gj" [42665fdd-c209-43ac-8852-3fd0517abce4] Running
	I0917 02:26:16.665229    5221 system_pods.go:61] "kindnet-fgvhm" [f8fe7dd6-85d9-447e-88f1-d98d354a0802] Running
	I0917 02:26:16.665232    5221 system_pods.go:61] "kube-apiserver-multinode-232000" [4bc0fa4f-4ca7-478d-8b7c-b59c24a56faa] Running
	I0917 02:26:16.665254    5221 system_pods.go:61] "kube-controller-manager-multinode-232000" [788e2a30-fcea-4f4c-afc3-52d73d046e1d] Running
	I0917 02:26:16.665257    5221 system_pods.go:61] "kube-proxy-8fb4t" [e73b5d46-804f-4a13-a286-f0194436c3fc] Running
	I0917 02:26:16.665259    5221 system_pods.go:61] "kube-proxy-9s8zh" [8516d216-3857-4702-9656-97c8c91337fc] Running
	I0917 02:26:16.665261    5221 system_pods.go:61] "kube-proxy-xlb2z" [66e8dada-5a23-453e-ba6e-a9146d3467e7] Running
	I0917 02:26:16.665278    5221 system_pods.go:61] "kube-scheduler-multinode-232000" [a38a42a2-e0f9-4c6e-aa99-8dae3f326090] Running
	I0917 02:26:16.665281    5221 system_pods.go:61] "storage-provisioner" [878f83a8-de4f-48b8-98ac-2d34171091ae] Running
	I0917 02:26:16.665284    5221 system_pods.go:74] duration metric: took 184.765005ms to wait for pod list to return data ...
	I0917 02:26:16.665290    5221 default_sa.go:34] waiting for default service account to be created ...
	I0917 02:26:16.857156    5221 request.go:632] Waited for 191.815982ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:26:16.857190    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/default/serviceaccounts
	I0917 02:26:16.857194    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:16.857202    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:16.857228    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:16.859564    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:16.859574    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:16.859579    5221 round_trippers.go:580]     Audit-Id: 4c357a3b-ca0b-419a-a053-564ae9323865
	I0917 02:26:16.859595    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:16.859599    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:16.859601    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:16.859604    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:16.859606    5221 round_trippers.go:580]     Content-Length: 261
	I0917 02:26:16.859609    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:17 GMT
	I0917 02:26:16.859620    5221 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"933"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"76b391ab-85a5-440a-b857-8ab86887edea","resourceVersion":"366","creationTimestamp":"2024-09-17T09:22:01Z"}}]}
	I0917 02:26:16.859742    5221 default_sa.go:45] found service account: "default"
	I0917 02:26:16.859751    5221 default_sa.go:55] duration metric: took 194.455804ms for default service account to be created ...
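	The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side token-bucket rate limiter, not from the apiserver. The defaults (commonly QPS 5, burst 10) produce exactly these ~200ms waits. A sketch of raising the limits on the rest config, trading apiserver load for latency; the kubeconfig path is hypothetical:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        // client-go throttles requests on the client side; the low defaults
        // are what generate the "Waited for ..." log lines above.
        cfg.QPS = 50
        cfg.Burst = 100
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println("client ready:", cs != nil)
    }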
	I0917 02:26:16.859756    5221 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 02:26:17.058197    5221 request.go:632] Waited for 198.397573ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:17.058307    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:17.058318    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:17.058330    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:17.058337    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:17.062328    5221 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:26:17.062340    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:17.062346    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:17.062349    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:17.062352    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:17.062355    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:17.062357    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:17 GMT
	I0917 02:26:17.062359    5221 round_trippers.go:580]     Audit-Id: 230385cc-abfc-4227-afab-112d2468a42d
	I0917 02:26:17.063236    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"937"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"926","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88910 chars]
	I0917 02:26:17.065166    5221 system_pods.go:86] 12 kube-system pods found
	I0917 02:26:17.065177    5221 system_pods.go:89] "coredns-7c65d6cfc9-hr8rd" [c990c87f-921e-45ba-845b-499147aaa1f9] Running
	I0917 02:26:17.065181    5221 system_pods.go:89] "etcd-multinode-232000" [023b8525-6267-41df-ab63-f9c82adf3da1] Running
	I0917 02:26:17.065184    5221 system_pods.go:89] "kindnet-7djsb" [4b28da1f-ce8e-43a9-bda0-e44de7b6d582] Running
	I0917 02:26:17.065192    5221 system_pods.go:89] "kindnet-bz9gj" [42665fdd-c209-43ac-8852-3fd0517abce4] Running
	I0917 02:26:17.065196    5221 system_pods.go:89] "kindnet-fgvhm" [f8fe7dd6-85d9-447e-88f1-d98d354a0802] Running
	I0917 02:26:17.065199    5221 system_pods.go:89] "kube-apiserver-multinode-232000" [4bc0fa4f-4ca7-478d-8b7c-b59c24a56faa] Running
	I0917 02:26:17.065202    5221 system_pods.go:89] "kube-controller-manager-multinode-232000" [788e2a30-fcea-4f4c-afc3-52d73d046e1d] Running
	I0917 02:26:17.065205    5221 system_pods.go:89] "kube-proxy-8fb4t" [e73b5d46-804f-4a13-a286-f0194436c3fc] Running
	I0917 02:26:17.065208    5221 system_pods.go:89] "kube-proxy-9s8zh" [8516d216-3857-4702-9656-97c8c91337fc] Running
	I0917 02:26:17.065211    5221 system_pods.go:89] "kube-proxy-xlb2z" [66e8dada-5a23-453e-ba6e-a9146d3467e7] Running
	I0917 02:26:17.065214    5221 system_pods.go:89] "kube-scheduler-multinode-232000" [a38a42a2-e0f9-4c6e-aa99-8dae3f326090] Running
	I0917 02:26:17.065217    5221 system_pods.go:89] "storage-provisioner" [878f83a8-de4f-48b8-98ac-2d34171091ae] Running
	I0917 02:26:17.065222    5221 system_pods.go:126] duration metric: took 205.460498ms to wait for k8s-apps to be running ...
	I0917 02:26:17.065231    5221 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:26:17.065289    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:26:17.077400    5221 system_svc.go:56] duration metric: took 12.165489ms WaitForService to wait for kubelet
	I0917 02:26:17.077424    5221 kubeadm.go:582] duration metric: took 14.604578473s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:26:17.077436    5221 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:26:17.257937    5221 request.go:632] Waited for 180.45319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes
	I0917 02:26:17.258025    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0917 02:26:17.258042    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:17.258053    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:17.258060    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:17.260832    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:17.260845    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:17.260850    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:17 GMT
	I0917 02:26:17.260854    5221 round_trippers.go:580]     Audit-Id: 2739f084-3335-4335-8ce7-1a24cb542294
	I0917 02:26:17.260857    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:17.260859    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:17.260868    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:17.260876    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:17.260985    5221 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"937"},"items":[{"metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14676 chars]
	I0917 02:26:17.261391    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:17.261400    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:17.261407    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:17.261410    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:17.261413    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:17.261415    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:17.261418    5221 node_conditions.go:105] duration metric: took 183.977806ms to run NodePressure ...
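	The NodePressure step lists all nodes and reads their capacity (here: 2 CPUs and 17734596Ki of ephemeral storage per node) while checking that no pressure conditions are set. A client-go sketch of the same verification, again with a hypothetical kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
            for _, c := range n.Status.Conditions {
                // MemoryPressure/DiskPressure/PIDPressure should all be False
                // on a healthy node; only Ready should be True.
                if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  pressure condition set: %s\n", c.Type)
                }
            }
        }
    }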
	I0917 02:26:17.261425    5221 start.go:241] waiting for startup goroutines ...
	I0917 02:26:17.261430    5221 start.go:246] waiting for cluster config update ...
	I0917 02:26:17.261436    5221 start.go:255] writing updated cluster config ...
	I0917 02:26:17.283156    5221 out.go:201] 
	I0917 02:26:17.305215    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:17.305357    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:26:17.327895    5221 out.go:177] * Starting "multinode-232000-m02" worker node in "multinode-232000" cluster
	I0917 02:26:17.370004    5221 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:26:17.370070    5221 cache.go:56] Caching tarball of preloaded images
	I0917 02:26:17.370252    5221 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:26:17.370270    5221 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:26:17.370405    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:26:17.371558    5221 start.go:360] acquireMachinesLock for multinode-232000-m02: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:26:17.371654    5221 start.go:364] duration metric: took 74.026µs to acquireMachinesLock for "multinode-232000-m02"
	I0917 02:26:17.371699    5221 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:26:17.371706    5221 fix.go:54] fixHost starting: m02
	I0917 02:26:17.372148    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:17.372173    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:17.381644    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53453
	I0917 02:26:17.381974    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:17.382347    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:17.382362    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:17.382644    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:17.382777    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:17.382873    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetState
	I0917 02:26:17.382948    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:17.383030    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | hyperkit pid from json: 4823
	I0917 02:26:17.383936    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | hyperkit pid 4823 missing from process table
	I0917 02:26:17.383964    5221 fix.go:112] recreateIfNeeded on multinode-232000-m02: state=Stopped err=<nil>
	I0917 02:26:17.383972    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	W0917 02:26:17.384057    5221 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:26:17.404872    5221 out.go:177] * Restarting existing hyperkit VM for "multinode-232000-m02" ...
	I0917 02:26:17.446913    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .Start
	I0917 02:26:17.447212    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:17.447238    5221 main.go:141] libmachine: (multinode-232000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/hyperkit.pid
	I0917 02:26:17.448962    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | hyperkit pid 4823 missing from process table
	I0917 02:26:17.448978    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | pid 4823 is in state "Stopped"
	I0917 02:26:17.448998    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/hyperkit.pid...
	I0917 02:26:17.449328    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Using UUID b4bb9835-5d54-4974-9049-06fa7b3612bb
	I0917 02:26:17.474940    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Generated MAC 66:f1:ae:9f:da:63
	I0917 02:26:17.474963    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000
	I0917 02:26:17.475112    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b4bb9835-5d54-4974-9049-06fa7b3612bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0917 02:26:17.475146    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b4bb9835-5d54-4974-9049-06fa7b3612bb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0917 02:26:17.475192    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b4bb9835-5d54-4974-9049-06fa7b3612bb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/multinode-232000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/bzimage,/Users/j
enkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000"}
	I0917 02:26:17.475233    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b4bb9835-5d54-4974-9049-06fa7b3612bb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/multinode-232000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/mult
inode-232000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000"
	I0917 02:26:17.475261    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:26:17.476624    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 DEBUG: hyperkit: Pid is 5269
	I0917 02:26:17.477107    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Attempt 0
	I0917 02:26:17.477117    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:17.477198    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | hyperkit pid from json: 5269
	I0917 02:26:17.479226    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Searching for 66:f1:ae:9f:da:63 in /var/db/dhcpd_leases ...
	I0917 02:26:17.479282    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0917 02:26:17.479309    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9c80}
	I0917 02:26:17.479325    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66e94ae5}
	I0917 02:26:17.479340    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9bd8}
	I0917 02:26:17.479352    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | Found match: 66:f1:ae:9f:da:63
	I0917 02:26:17.479374    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | IP: 192.169.0.15
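	The driver recovers the VM's IP by matching the generated MAC (66:f1:ae:9f:da:63) against the host's DHCP lease database at /var/db/dhcpd_leases. A rough sketch of that lookup; the ip_address=/hw_address= field names and their ordering within a lease block are assumptions about the macOS bootpd lease format, not the driver's exact parser:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findLeaseIP scans a bootpd-style lease file for a MAC and returns the
    // ip_address recorded in the same lease block. It assumes ip_address=
    // appears before hw_address= within each block.
    func findLeaseIP(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ip_address=") {
                ip = strings.TrimPrefix(line, "ip_address=")
            }
            if strings.Contains(line, mac) && ip != "" {
                return ip, nil
            }
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        ip, err := findLeaseIP("/var/db/dhcpd_leases", "66:f1:ae:9f:da:63")
        if err != nil {
            panic(err)
        }
        fmt.Println(ip) // expected: 192.169.0.15, per the log above
    }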
	I0917 02:26:17.479428    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetConfigRaw
	I0917 02:26:17.480194    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetIP
	I0917 02:26:17.480405    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:26:17.480807    5221 machine.go:93] provisionDockerMachine start ...
	I0917 02:26:17.480817    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:17.480927    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:17.481023    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:17.481124    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:17.481257    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:17.481354    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:17.481479    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:17.481637    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:17.481644    5221 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:26:17.484653    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:26:17.492711    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:26:17.493633    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:26:17.493651    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:26:17.493672    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:26:17.493686    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:26:17.879934    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:26:17.879949    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:26:17.994816    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:26:17.994833    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:26:17.994842    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:26:17.994851    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:26:17.995685    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:26:17.995694    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:26:23.608811    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0917 02:26:23.608857    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0917 02:26:23.608876    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0917 02:26:23.633865    5221 main.go:141] libmachine: (multinode-232000-m02) DBG | 2024/09/17 02:26:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0917 02:26:28.556191    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:26:28.556216    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetMachineName
	I0917 02:26:28.556393    5221 buildroot.go:166] provisioning hostname "multinode-232000-m02"
	I0917 02:26:28.556402    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetMachineName
	I0917 02:26:28.556506    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:28.556598    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:28.556703    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.556798    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.556921    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:28.557063    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:28.557211    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:28.557219    5221 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-232000-m02 && echo "multinode-232000-m02" | sudo tee /etc/hostname
	I0917 02:26:28.632252    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-232000-m02
	
	I0917 02:26:28.632265    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:28.632409    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:28.632512    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.632609    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.632718    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:28.632874    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:28.633027    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:28.633039    5221 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-232000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-232000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-232000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:26:28.710080    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: 
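	The hostname and /etc/hosts commands above run over a plain SSH session authenticated with the machine's id_rsa key (user "docker", port 22). A minimal sketch of issuing such a remote command with golang.org/x/crypto/ssh, not the libmachine runner itself; the key path is hypothetical:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPEM, err := os.ReadFile("/path/to/machines/multinode-232000-m02/id_rsa") // hypothetical path
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local VM, not for real hosts
        }
        client, err := ssh.Dial("tcp", "192.169.0.15:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname multinode-232000-m02 && hostname`)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out)
    }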
	I0917 02:26:28.710103    5221 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:26:28.710114    5221 buildroot.go:174] setting up certificates
	I0917 02:26:28.710131    5221 provision.go:84] configureAuth start
	I0917 02:26:28.710141    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetMachineName
	I0917 02:26:28.710271    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetIP
	I0917 02:26:28.710388    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:28.710467    5221 provision.go:143] copyHostCerts
	I0917 02:26:28.710497    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:26:28.710544    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:26:28.710549    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:26:28.710779    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:26:28.710990    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:26:28.711021    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:26:28.711026    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:26:28.711124    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:26:28.711271    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:26:28.711299    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:26:28.711304    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:26:28.711401    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:26:28.711570    5221 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.multinode-232000-m02 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-232000-m02]
	I0917 02:26:28.767847    5221 provision.go:177] copyRemoteCerts
	I0917 02:26:28.767904    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:26:28.767932    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:28.768077    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:28.768180    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.768277    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:28.768362    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/id_rsa Username:docker}
	I0917 02:26:28.807655    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:26:28.807726    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:26:28.827158    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:26:28.827239    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0917 02:26:28.846504    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:26:28.846586    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:26:28.866159    5221 provision.go:87] duration metric: took 156.017573ms to configureAuth
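	configureAuth above regenerates a server certificate with SANs for the node's IPs and hostnames, then scp's the CA, server cert, and server key into /etc/docker on the guest. One way to get the same effect as that scp step, streaming a local file to a root-owned remote path through `sudo tee`, is sketched below as a standalone helper package (it reuses an *ssh.Client like the one dialed in the previous sketch):

    package sshutil

    import (
        "bytes"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // PushFile writes local file contents to a root-owned remote path via
    // `sudo tee`, mirroring what the cert-copy step above accomplishes.
    func PushFile(client *ssh.Client, local, remote string) error {
        data, err := os.ReadFile(local)
        if err != nil {
            return err
        }
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + remote + " >/dev/null")
    }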
	I0917 02:26:28.866173    5221 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:26:28.866339    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:28.866353    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:28.866487    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:28.866573    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:28.866675    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.866761    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.866842    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:28.866960    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:28.867086    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:28.867094    5221 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:26:28.929765    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:26:28.929778    5221 buildroot.go:70] root file system type: tmpfs
	I0917 02:26:28.929847    5221 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:26:28.929859    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:28.929993    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:28.930076    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.930156    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:28.930226    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:28.930358    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:28.930490    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:28.930533    5221 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:26:29.004352    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:26:29.004372    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:29.004507    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:29.004604    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:29.004705    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:29.004794    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:29.004947    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:29.005087    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:29.005099    5221 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:26:30.578097    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:26:30.578112    5221 machine.go:96] duration metric: took 13.097237764s to provisionDockerMachine
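	Two details of the docker.service write-out above are worth noting. First, the empty ExecStart= line is deliberate: systemd treats multiple ExecStart= directives as an error unless Type=oneshot, so the blank directive clears the command inherited from the base unit before the full dockerd command line is set. Second, the unit is only swapped in when `diff -u` reports a difference, which makes the provisioning step idempotent. The Environment="NO_PROXY=192.169.0.14" line is injected so the worker's Docker daemon bypasses any proxy for the control-plane IP. A sketch of rendering such a unit with text/template; the template and field names here are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    {{if .NoProxy}}Environment="NO_PROXY={{.NoProxy}}"
    {{end}}# An empty ExecStart= clears the command inherited from the base unit;
    # systemd rejects multiple ExecStart= lines unless Type=oneshot.
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock

    [Install]
    WantedBy=multi-user.target
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unit))
        if err := t.Execute(os.Stdout, struct{ NoProxy string }{"192.169.0.14"}); err != nil {
            panic(err)
        }
    }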
	I0917 02:26:30.578120    5221 start.go:293] postStartSetup for "multinode-232000-m02" (driver="hyperkit")
	I0917 02:26:30.578129    5221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:26:30.578139    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:30.578333    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:26:30.578346    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:30.578450    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:30.578541    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:30.578631    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:30.578734    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/id_rsa Username:docker}
	I0917 02:26:30.621027    5221 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:26:30.624319    5221 command_runner.go:130] > NAME=Buildroot
	I0917 02:26:30.624327    5221 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0917 02:26:30.624331    5221 command_runner.go:130] > ID=buildroot
	I0917 02:26:30.624334    5221 command_runner.go:130] > VERSION_ID=2023.02.9
	I0917 02:26:30.624338    5221 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0917 02:26:30.624577    5221 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:26:30.624584    5221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:26:30.624664    5221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:26:30.624803    5221 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:26:30.624809    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:26:30.624967    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:26:30.632397    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:26:30.658340    5221 start.go:296] duration metric: took 80.207794ms for postStartSetup
	I0917 02:26:30.658367    5221 fix.go:56] duration metric: took 13.286595326s for fixHost
	I0917 02:26:30.658381    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:30.658518    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:30.658619    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:30.658712    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:30.658802    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:30.658947    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:30.659081    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0917 02:26:30.659088    5221 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:26:30.724970    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726565190.868042565
	
	I0917 02:26:30.724981    5221 fix.go:216] guest clock: 1726565190.868042565
	I0917 02:26:30.724987    5221 fix.go:229] Guest: 2024-09-17 02:26:30.868042565 -0700 PDT Remote: 2024-09-17 02:26:30.658372 -0700 PDT m=+79.726730067 (delta=209.670565ms)
	I0917 02:26:30.724998    5221 fix.go:200] guest clock delta is within tolerance: 209.670565ms
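	The fixHost step reads `date +%s.%N` on the guest and compares it against the host clock, accepting the result if the skew stays under a tolerance (here the 209ms delta passes). A sketch of the same comparison, parsing the seconds.nanoseconds output; the 2s tolerance is an assumed value, not necessarily minikube's:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseEpoch converts `date +%s.%N` output into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // pad or truncate the fraction to exactly 9 digits of nanoseconds
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseEpoch("1726565190.868042565") // the guest clock from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < 2*time.Second)
    }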
	I0917 02:26:30.725002    5221 start.go:83] releasing machines lock for "multinode-232000-m02", held for 13.353263705s
	I0917 02:26:30.725019    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:30.725145    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetIP
	I0917 02:26:30.750563    5221 out.go:177] * Found network options:
	I0917 02:26:30.771576    5221 out.go:177]   - NO_PROXY=192.169.0.14
	W0917 02:26:30.792469    5221 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:26:30.792511    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:30.793320    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:30.793625    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:30.793774    5221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:26:30.793812    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	W0917 02:26:30.793892    5221 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:26:30.793999    5221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:26:30.794002    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:30.794019    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:30.794233    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:30.794235    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:30.794447    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:30.794508    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:30.794622    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/id_rsa Username:docker}
	I0917 02:26:30.794660    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:30.794783    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/id_rsa Username:docker}
	I0917 02:26:30.831201    5221 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0917 02:26:30.831244    5221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:26:30.831311    5221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:26:30.884480    5221 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0917 02:26:30.884564    5221 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0917 02:26:30.884592    5221 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:26:30.884606    5221 start.go:495] detecting cgroup driver to use...
	I0917 02:26:30.884753    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:26:30.900568    5221 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0917 02:26:30.900827    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:26:30.909247    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:26:30.917752    5221 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:26:30.917806    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:26:30.926220    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:26:30.934462    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:26:30.942603    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:26:30.951114    5221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:26:30.959472    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:26:30.968017    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:26:30.976360    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:26:30.984769    5221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:26:30.992045    5221 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0917 02:26:30.992145    5221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:26:30.999749    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:31.093570    5221 ssh_runner.go:195] Run: sudo systemctl restart containerd
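The sed pipeline above flips SystemdCgroup to false so containerd uses the cgroupfs driver, then reloads and restarts containerd. A minimal in-process Go sketch of that one substitution (illustrative only; minikube performs it with sed over SSH):

    package main

    import (
        "fmt"
        "regexp"
    )

    // systemdCgroupRe matches the SystemdCgroup line in containerd's
    // config.toml, preserving its indentation in group 1.
    var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

    func useCgroupfs(config string) string {
        return systemdCgroupRe.ReplaceAllString(config, "${1}SystemdCgroup = false")
    }

    func main() {
        in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
        fmt.Print(useCgroupfs(in))
    }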
	I0917 02:26:31.113288    5221 start.go:495] detecting cgroup driver to use...
	I0917 02:26:31.113365    5221 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:26:31.129793    5221 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0917 02:26:31.130343    5221 command_runner.go:130] > [Unit]
	I0917 02:26:31.130352    5221 command_runner.go:130] > Description=Docker Application Container Engine
	I0917 02:26:31.130357    5221 command_runner.go:130] > Documentation=https://docs.docker.com
	I0917 02:26:31.130362    5221 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0917 02:26:31.130367    5221 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0917 02:26:31.130373    5221 command_runner.go:130] > StartLimitBurst=3
	I0917 02:26:31.130377    5221 command_runner.go:130] > StartLimitIntervalSec=60
	I0917 02:26:31.130380    5221 command_runner.go:130] > [Service]
	I0917 02:26:31.130384    5221 command_runner.go:130] > Type=notify
	I0917 02:26:31.130387    5221 command_runner.go:130] > Restart=on-failure
	I0917 02:26:31.130391    5221 command_runner.go:130] > Environment=NO_PROXY=192.169.0.14
	I0917 02:26:31.130397    5221 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0917 02:26:31.130407    5221 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0917 02:26:31.130413    5221 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0917 02:26:31.130418    5221 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0917 02:26:31.130424    5221 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0917 02:26:31.130429    5221 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0917 02:26:31.130437    5221 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0917 02:26:31.130450    5221 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0917 02:26:31.130456    5221 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0917 02:26:31.130460    5221 command_runner.go:130] > ExecStart=
	I0917 02:26:31.130473    5221 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0917 02:26:31.130479    5221 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0917 02:26:31.130486    5221 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0917 02:26:31.130491    5221 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0917 02:26:31.130495    5221 command_runner.go:130] > LimitNOFILE=infinity
	I0917 02:26:31.130498    5221 command_runner.go:130] > LimitNPROC=infinity
	I0917 02:26:31.130501    5221 command_runner.go:130] > LimitCORE=infinity
	I0917 02:26:31.130506    5221 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0917 02:26:31.130511    5221 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0917 02:26:31.130515    5221 command_runner.go:130] > TasksMax=infinity
	I0917 02:26:31.130519    5221 command_runner.go:130] > TimeoutStartSec=0
	I0917 02:26:31.130524    5221 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0917 02:26:31.130528    5221 command_runner.go:130] > Delegate=yes
	I0917 02:26:31.130533    5221 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0917 02:26:31.130540    5221 command_runner.go:130] > KillMode=process
	I0917 02:26:31.130544    5221 command_runner.go:130] > [Install]
	I0917 02:26:31.130548    5221 command_runner.go:130] > WantedBy=multi-user.target
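The drop-in above intentionally contains ExecStart= twice: the empty assignment clears the command inherited from the base unit, exactly as its embedded comments explain. A rough Go sketch of that rule (an approximation for illustration, not systemd's actual parser):

    package main

    import (
        "fmt"
        "strings"
    )

    // execStartValid approximates systemd's constraint that non-oneshot
    // services may carry only one effective ExecStart: an empty
    // "ExecStart=" resets the list, and at most one command may follow.
    func execStartValid(unit string) bool {
        count := 0
        for _, line := range strings.Split(unit, "\n") {
            line = strings.TrimSpace(line)
            if line == "ExecStart=" {
                count = 0 // empty assignment clears inherited commands
            } else if strings.HasPrefix(line, "ExecStart=") {
                count++
            }
        }
        return count <= 1
    }

    func main() {
        dropIn := "[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n"
        fmt.Println(execStartValid(dropIn)) // true: the reset keeps it to one command
    }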
	I0917 02:26:31.130974    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:26:31.147559    5221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:26:31.165969    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:26:31.176642    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:26:31.187736    5221 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:26:31.212484    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:26:31.223903    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:26:31.239009    5221 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0917 02:26:31.239082    5221 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:26:31.242058    5221 command_runner.go:130] > /usr/bin/cri-dockerd
	I0917 02:26:31.242223    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:26:31.249613    5221 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:26:31.263010    5221 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:26:31.361860    5221 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:26:31.471436    5221 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:26:31.471459    5221 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:26:31.485353    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:31.575231    5221 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:26:33.866979    5221 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.291718764s)
	I0917 02:26:33.867054    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:26:33.877320    5221 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:26:33.890130    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:26:33.900643    5221 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:26:34.003947    5221 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:26:34.111645    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:34.213673    5221 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:26:34.228040    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:26:34.239007    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:34.333302    5221 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:26:34.394346    5221 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:26:34.394419    5221 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:26:34.400434    5221 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0917 02:26:34.400458    5221 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0917 02:26:34.400478    5221 command_runner.go:130] > Device: 0,22	Inode: 753         Links: 1
	I0917 02:26:34.400487    5221 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0917 02:26:34.400494    5221 command_runner.go:130] > Access: 2024-09-17 09:26:34.490657603 +0000
	I0917 02:26:34.400506    5221 command_runner.go:130] > Modify: 2024-09-17 09:26:34.490657603 +0000
	I0917 02:26:34.400514    5221 command_runner.go:130] > Change: 2024-09-17 09:26:34.492657418 +0000
	I0917 02:26:34.400519    5221 command_runner.go:130] >  Birth: -
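start.go then waits up to 60s for /var/run/cri-dockerd.sock to appear, confirming it with stat as shown above. A minimal Go sketch of that wait (the 500ms poll interval is an assumption, not minikube's documented cadence):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the path exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }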
	I0917 02:26:34.400540    5221 start.go:563] Will wait 60s for crictl version
	I0917 02:26:34.400600    5221 ssh_runner.go:195] Run: which crictl
	I0917 02:26:34.404203    5221 command_runner.go:130] > /usr/bin/crictl
	I0917 02:26:34.404403    5221 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:26:34.428361    5221 command_runner.go:130] > Version:  0.1.0
	I0917 02:26:34.428373    5221 command_runner.go:130] > RuntimeName:  docker
	I0917 02:26:34.428378    5221 command_runner.go:130] > RuntimeVersion:  27.2.1
	I0917 02:26:34.428382    5221 command_runner.go:130] > RuntimeApiVersion:  v1
	I0917 02:26:34.429325    5221 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 02:26:34.429417    5221 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:26:34.445620    5221 command_runner.go:130] > 27.2.1
	I0917 02:26:34.446459    5221 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:26:34.461220    5221 command_runner.go:130] > 27.2.1
	I0917 02:26:34.507338    5221 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 02:26:34.528162    5221 out.go:177]   - env NO_PROXY=192.169.0.14
	I0917 02:26:34.549250    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetIP
	I0917 02:26:34.549631    5221 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0917 02:26:34.553785    5221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:26:34.564128    5221 mustload.go:65] Loading cluster: multinode-232000
	I0917 02:26:34.564298    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:34.564519    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:34.564542    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:34.573114    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53474
	I0917 02:26:34.573439    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:34.573801    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:34.573822    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:34.574058    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:34.574195    5221 main.go:141] libmachine: (multinode-232000) Calling .GetState
	I0917 02:26:34.574285    5221 main.go:141] libmachine: (multinode-232000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:34.574351    5221 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid from json: 5233
	I0917 02:26:34.575311    5221 host.go:66] Checking if "multinode-232000" exists ...
	I0917 02:26:34.575577    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:34.575602    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:34.584073    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53476
	I0917 02:26:34.584410    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:34.584722    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:34.584735    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:34.584945    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:34.585060    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:26:34.585150    5221 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000 for IP: 192.169.0.15
	I0917 02:26:34.585156    5221 certs.go:194] generating shared ca certs ...
	I0917 02:26:34.585168    5221 certs.go:226] acquiring lock for ca certs: {Name:mkbc37b68ace578bc430db43facbca466a5a1602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:26:34.585314    5221 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key
	I0917 02:26:34.585369    5221 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key
	I0917 02:26:34.585378    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 02:26:34.585402    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 02:26:34.585420    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 02:26:34.585437    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 02:26:34.585511    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem (1338 bytes)
	W0917 02:26:34.585561    5221 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0917 02:26:34.585571    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 02:26:34.585610    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem (1078 bytes)
	I0917 02:26:34.585643    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:26:34.585677    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem (1675 bytes)
	I0917 02:26:34.585740    5221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:26:34.585778    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem -> /usr/share/ca-certificates/1560.pem
	I0917 02:26:34.585798    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /usr/share/ca-certificates/15602.pem
	I0917 02:26:34.585816    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:26:34.585840    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:26:34.605627    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 02:26:34.624677    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:26:34.643935    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:26:34.663544    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0917 02:26:34.682566    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0917 02:26:34.701196    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:26:34.720066    5221 ssh_runner.go:195] Run: openssl version
	I0917 02:26:34.724235    5221 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0917 02:26:34.724443    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0917 02:26:34.733732    5221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0917 02:26:34.736968    5221 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:26:34.737105    5221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:55 /usr/share/ca-certificates/1560.pem
	I0917 02:26:34.737158    5221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0917 02:26:34.741230    5221 command_runner.go:130] > 51391683
	I0917 02:26:34.741425    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0917 02:26:34.750892    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0917 02:26:34.760076    5221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0917 02:26:34.763382    5221 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:26:34.763486    5221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:55 /usr/share/ca-certificates/15602.pem
	I0917 02:26:34.763534    5221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0917 02:26:34.767716    5221 command_runner.go:130] > 3ec20f2e
	I0917 02:26:34.767896    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:26:34.777167    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:26:34.786647    5221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:26:34.790070    5221 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:26:34.790112    5221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:26:34.790164    5221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:26:34.794472    5221 command_runner.go:130] > b5213941
	I0917 02:26:34.794524    5221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
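The openssl x509 -hash calls above compute the subject hash that names each trust-store symlink, e.g. /etc/ssl/certs/51391683.0 pointing at the 1560.pem cert. A Go sketch of the linking step, mirroring the test -L || ln -fs guard (paths taken from the log; illustrative only):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // linkCACert exposes a CA cert under /etc/ssl/certs/<subject-hash>.0,
    // skipping the symlink if one already exists.
    func linkCACert(certPath, hash string) error {
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // already linked, as the `test -L || ln -fs` guard checks
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCACert("/etc/ssl/certs/1560.pem", "51391683"); err != nil {
            fmt.Println("link failed:", err)
        }
    }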
	I0917 02:26:34.803820    5221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:26:34.806949    5221 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 02:26:34.807013    5221 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 02:26:34.807045    5221 kubeadm.go:934] updating node {m02 192.169.0.15 8443 v1.31.1 docker false true} ...
	I0917 02:26:34.807107    5221 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-232000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:26:34.807157    5221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 02:26:34.815167    5221 command_runner.go:130] > kubeadm
	I0917 02:26:34.815176    5221 command_runner.go:130] > kubectl
	I0917 02:26:34.815180    5221 command_runner.go:130] > kubelet
	I0917 02:26:34.815283    5221 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:26:34.815336    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0917 02:26:34.823647    5221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0917 02:26:34.837368    5221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
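The kubelet unit dumped earlier is rendered with per-node flags (--hostname-override, --node-ip) and copied above into /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service. A simplified Go sketch of that rendering, assuming a hypothetical template that mirrors the logged unit:

    package main

    import (
        "os"
        "text/template"
    )

    // unitTmpl is a simplified stand-in for minikube's kubelet unit template.
    const unitTmpl = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        err := t.Execute(os.Stdout, struct{ Version, Node, IP string }{
            "v1.31.1", "multinode-232000-m02", "192.169.0.15",
        })
        if err != nil {
            panic(err)
        }
    }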
	I0917 02:26:34.850966    5221 ssh_runner.go:195] Run: grep 192.169.0.14	control-plane.minikube.internal$ /etc/hosts
	I0917 02:26:34.853962    5221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
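The bash one-liner above is an idempotent upsert: strip any existing control-plane.minikube.internal entry from /etc/hosts, then append the current mapping. A Go sketch of the same transformation on the file contents (the temp-file-and-copy dance from the shell version is elided):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost removes stale lines for name and appends "ip\tname".
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        in := "127.0.0.1\tlocalhost\n192.169.0.2\tcontrol-plane.minikube.internal\n"
        fmt.Print(upsertHost(in, "192.169.0.14", "control-plane.minikube.internal"))
    }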
	I0917 02:26:34.864545    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:34.968936    5221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:26:34.984578    5221 host.go:66] Checking if "multinode-232000" exists ...
	I0917 02:26:34.984882    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:34.984909    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:34.994092    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53478
	I0917 02:26:34.994489    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:34.994849    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:34.994863    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:34.995089    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:34.995220    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:26:34.995319    5221 start.go:317] joinCluster: &{Name:multinode-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:multinode-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.16 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:26:34.995412    5221 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0917 02:26:34.995434    5221 host.go:66] Checking if "multinode-232000-m02" exists ...
	I0917 02:26:34.995714    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:34.995739    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:35.004663    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53480
	I0917 02:26:35.005092    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:35.005399    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:35.005410    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:35.005639    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:35.005752    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:26:35.005845    5221 mustload.go:65] Loading cluster: multinode-232000
	I0917 02:26:35.006022    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:35.006268    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:35.006294    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:35.015188    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53482
	I0917 02:26:35.015530    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:35.015892    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:35.015909    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:35.016143    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:35.016263    5221 main.go:141] libmachine: (multinode-232000) Calling .GetState
	I0917 02:26:35.016347    5221 main.go:141] libmachine: (multinode-232000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:35.016429    5221 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid from json: 5233
	I0917 02:26:35.017415    5221 host.go:66] Checking if "multinode-232000" exists ...
	I0917 02:26:35.017676    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:35.017704    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:35.026838    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53484
	I0917 02:26:35.027207    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:35.027564    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:35.027581    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:35.027777    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:35.027890    5221 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:26:35.027986    5221 api_server.go:166] Checking apiserver status ...
	I0917 02:26:35.028041    5221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:26:35.028052    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:26:35.028139    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:26:35.028243    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:26:35.028330    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:26:35.028416    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:26:35.067637    5221 command_runner.go:130] > 1651
	I0917 02:26:35.067709    5221 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1651/cgroup
	W0917 02:26:35.076285    5221 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1651/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:26:35.076359    5221 ssh_runner.go:195] Run: ls
	I0917 02:26:35.079921    5221 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:26:35.083627    5221 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
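The apiserver check above probes https://192.169.0.14:8443/healthz and treats a 200 response with body "ok" as healthy. A self-contained Go sketch of that probe (InsecureSkipVerify keeps the sketch standalone; the real client trusts the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.169.0.14:8443/healthz")
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }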
	I0917 02:26:35.083687    5221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-232000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0917 02:26:35.164216    5221 command_runner.go:130] > node/multinode-232000-m02 cordoned
	I0917 02:26:38.193073    5221 command_runner.go:130] > pod "busybox-7dff88458-8tvvp" has DeletionTimestamp older than 1 seconds, skipping
	I0917 02:26:38.193087    5221 command_runner.go:130] > node/multinode-232000-m02 drained
	I0917 02:26:38.194698    5221 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-bz9gj, kube-system/kube-proxy-8fb4t
	I0917 02:26:38.194803    5221 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-232000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.111089392s)
	I0917 02:26:38.194813    5221 node.go:128] successfully drained node "multinode-232000-m02"
	I0917 02:26:38.194838    5221 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0917 02:26:38.194856    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:26:38.195024    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:26:38.195120    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:26:38.195213    5221 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:26:38.195283    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/id_rsa Username:docker}
	I0917 02:26:38.292934    5221 command_runner.go:130] ! W0917 09:26:38.440911    1326 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0917 02:26:38.336217    5221 command_runner.go:130] ! W0917 09:26:38.484146    1326 cleanupnode.go:105] [reset] Failed to remove containers: failed to stop running pod adb846aaec844e84568d1e66bb150b22c5064af45b85ce68490175a102fcf711: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "busybox-7dff88458-8tvvp_default" network: cni config uninitialized
	I0917 02:26:38.338538    5221 command_runner.go:130] > [preflight] Running pre-flight checks
	I0917 02:26:38.338549    5221 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0917 02:26:38.338554    5221 command_runner.go:130] > [reset] Stopping the kubelet service
	I0917 02:26:38.338567    5221 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0917 02:26:38.338580    5221 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0917 02:26:38.338598    5221 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0917 02:26:38.338604    5221 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0917 02:26:38.338611    5221 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0917 02:26:38.338616    5221 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0917 02:26:38.338624    5221 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0917 02:26:38.338630    5221 command_runner.go:130] > to reset your system's IPVS tables.
	I0917 02:26:38.338638    5221 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0917 02:26:38.338651    5221 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0917 02:26:38.338661    5221 node.go:155] successfully reset node "multinode-232000-m02"
	I0917 02:26:38.338924    5221 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:26:38.339125    5221 kapi.go:59] client config for multinode-232000: &rest.Config{Host:"https://192.169.0.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x410b720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:26:38.339401    5221 request.go:1351] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0917 02:26:38.339439    5221 round_trippers.go:463] DELETE https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:38.339444    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:38.339450    5221 round_trippers.go:473]     Content-Type: application/json
	I0917 02:26:38.339454    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:38.339457    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:38.342177    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:38.342187    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:38.342192    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:38.342196    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:38.342199    5221 round_trippers.go:580]     Content-Length: 171
	I0917 02:26:38.342202    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:38 GMT
	I0917 02:26:38.342204    5221 round_trippers.go:580]     Audit-Id: e79ace76-551d-42c7-a3a2-f1570f343321
	I0917 02:26:38.342206    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:38.342208    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:38.342323    5221 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-232000-m02","kind":"nodes","uid":"b0d6988f-c01e-465b-b2df-6e79ea652296"}}
	I0917 02:26:38.342344    5221 node.go:180] successfully deleted node "multinode-232000-m02"
	I0917 02:26:38.342351    5221 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
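The DELETE on /api/v1/nodes/multinode-232000-m02 above maps to a single client-go call. A sketch under the assumption that /var/lib/minikube/kubeconfig is the kubeconfig in use (the cordon/drain before it is done via kubectl in the log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Remove the stale worker node object before rejoining it.
        err = clientset.CoreV1().Nodes().Delete(context.TODO(), "multinode-232000-m02", metav1.DeleteOptions{})
        fmt.Println("delete result:", err)
    }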
	I0917 02:26:38.342366    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 02:26:38.342378    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:26:38.342522    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:26:38.342644    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:26:38.342740    5221 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:26:38.342825    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:26:38.448171    5221 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 8aym42.82ssaevmx169fm1f --discovery-token-ca-cert-hash sha256:7d55cb80c4cca3ccf2edf1a375f9019832d41d9684c82d992c80cf0c3419888b 
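The join command printed above embeds a kubeadm bootstrap token, which by design has the form <6 chars>.<16 chars> over [a-z0-9]. A quick check against the token from the log:

    package main

    import (
        "fmt"
        "regexp"
    )

    // bootstrapTokenRe is the documented kubeadm bootstrap token shape.
    var bootstrapTokenRe = regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)

    func main() {
        fmt.Println(bootstrapTokenRe.MatchString("8aym42.82ssaevmx169fm1f")) // true
    }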
	I0917 02:26:38.449706    5221 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0917 02:26:38.449725    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8aym42.82ssaevmx169fm1f --discovery-token-ca-cert-hash sha256:7d55cb80c4cca3ccf2edf1a375f9019832d41d9684c82d992c80cf0c3419888b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-232000-m02"
	I0917 02:26:38.482499    5221 command_runner.go:130] > [preflight] Running pre-flight checks
	I0917 02:26:38.557396    5221 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0917 02:26:38.557413    5221 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0917 02:26:38.587915    5221 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 02:26:38.587930    5221 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 02:26:38.587935    5221 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0917 02:26:38.702866    5221 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 02:26:39.215760    5221 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 513.228573ms
	I0917 02:26:39.215775    5221 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0917 02:26:40.227022    5221 command_runner.go:130] > This node has joined the cluster:
	I0917 02:26:40.227037    5221 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0917 02:26:40.227043    5221 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0917 02:26:40.227048    5221 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0917 02:26:40.228914    5221 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 02:26:40.229057    5221 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8aym42.82ssaevmx169fm1f --discovery-token-ca-cert-hash sha256:7d55cb80c4cca3ccf2edf1a375f9019832d41d9684c82d992c80cf0c3419888b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-232000-m02": (1.779310241s)
	I0917 02:26:40.229078    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 02:26:40.449219    5221 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0917 02:26:40.449315    5221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-232000-m02 minikube.k8s.io/updated_at=2024_09_17T02_26_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=multinode-232000 minikube.k8s.io/primary=false
	I0917 02:26:40.533346    5221 command_runner.go:130] > node/multinode-232000-m02 labeled
	I0917 02:26:40.533368    5221 start.go:319] duration metric: took 5.538024221s to joinCluster
	I0917 02:26:40.533402    5221 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0917 02:26:40.533622    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:40.556930    5221 out.go:177] * Verifying Kubernetes components...
	I0917 02:26:40.598764    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:26:40.690186    5221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:26:40.703527    5221 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 02:26:40.703718    5221 kapi.go:59] client config for multinode-232000: &rest.Config{Host:"https://192.169.0.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x410b720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:26:40.703914    5221 node_ready.go:35] waiting up to 6m0s for node "multinode-232000-m02" to be "Ready" ...
	I0917 02:26:40.703964    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:40.703969    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:40.703974    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:40.703979    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:40.705513    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:40.705522    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:40.705528    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:40.705532    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:40.705535    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:40.705551    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:40 GMT
	I0917 02:26:40.705559    5221 round_trippers.go:580]     Audit-Id: e9034616-90ec-437c-97a3-d918ead229a3
	I0917 02:26:40.705561    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:40.705635    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"980","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3564 chars]
	I0917 02:26:41.205042    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:41.205054    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:41.205061    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:41.205064    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:41.207847    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:41.207862    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:41.207868    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:41.207872    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:41.207875    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:41 GMT
	I0917 02:26:41.207878    5221 round_trippers.go:580]     Audit-Id: c6b8f0c2-bdfa-4f42-9bcd-a1d3f8563e06
	I0917 02:26:41.207880    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:41.207883    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:41.208045    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"980","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3564 chars]
	I0917 02:26:41.704598    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:41.704611    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:41.704617    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:41.704622    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:41.706638    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:41.706650    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:41.706654    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:41.706657    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:41 GMT
	I0917 02:26:41.706659    5221 round_trippers.go:580]     Audit-Id: 450959da-e275-4be9-8e1c-88a712a8a297
	I0917 02:26:41.706662    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:41.706664    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:41.706667    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:41.706900    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"980","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3564 chars]
	I0917 02:26:42.205166    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:42.205180    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:42.205186    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:42.205189    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:42.206905    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:42.206915    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:42.206921    5221 round_trippers.go:580]     Audit-Id: ab379873-93db-4948-aed0-622077ccb5b3
	I0917 02:26:42.206924    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:42.206926    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:42.206930    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:42.206939    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:42.206942    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:42 GMT
	I0917 02:26:42.207229    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:42.704754    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:42.704781    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:42.704792    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:42.704797    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:42.707400    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:42.707416    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:42.707423    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:42.707428    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:42 GMT
	I0917 02:26:42.707433    5221 round_trippers.go:580]     Audit-Id: b8237b06-1cee-42bc-acbb-0febe5fbdda1
	I0917 02:26:42.707436    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:42.707439    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:42.707442    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:42.707591    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:42.707812    5221 node_ready.go:53] node "multinode-232000-m02" has status "Ready":"False"
	I0917 02:26:43.206206    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:43.206233    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:43.206288    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:43.206301    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:43.210590    5221 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 02:26:43.210614    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:43.210621    5221 round_trippers.go:580]     Audit-Id: 9f069409-4e29-4849-bc2e-b27f90cbb81e
	I0917 02:26:43.210624    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:43.210627    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:43.210642    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:43.210649    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:43.210674    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:43 GMT
	I0917 02:26:43.210728    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:43.704522    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:43.704550    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:43.704562    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:43.704577    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:43.707127    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:43.707144    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:43.707154    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:43.707159    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:43.707164    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:43 GMT
	I0917 02:26:43.707167    5221 round_trippers.go:580]     Audit-Id: 175ec7a3-76b6-4ac4-948d-d1ec35a8370e
	I0917 02:26:43.707170    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:43.707174    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:43.707515    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:44.204276    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:44.204296    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:44.204307    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:44.204315    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:44.206349    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:44.206365    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:44.206375    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:44.206382    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:44.206388    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:44 GMT
	I0917 02:26:44.206393    5221 round_trippers.go:580]     Audit-Id: 4afbce11-1036-4327-8289-01a805771094
	I0917 02:26:44.206437    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:44.206447    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:44.206620    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:44.706137    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:44.706196    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:44.706207    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:44.706214    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:44.709595    5221 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:26:44.709610    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:44.709617    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:44 GMT
	I0917 02:26:44.709622    5221 round_trippers.go:580]     Audit-Id: d919e900-7db4-4738-8c26-ef42edd87761
	I0917 02:26:44.709625    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:44.709630    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:44.709641    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:44.709648    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:44.709758    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:44.709987    5221 node_ready.go:53] node "multinode-232000-m02" has status "Ready":"False"
	I0917 02:26:45.204865    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:45.204884    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:45.204895    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:45.204902    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:45.207236    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:45.207251    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:45.207258    5221 round_trippers.go:580]     Audit-Id: 4f05de8d-7d0d-4db6-bc65-2a86b66d44e1
	I0917 02:26:45.207263    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:45.207267    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:45.207271    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:45.207274    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:45.207277    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:45 GMT
	I0917 02:26:45.207387    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:45.705028    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:45.705058    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:45.705072    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:45.705081    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:45.707870    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:45.707885    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:45.707892    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:45.707897    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:45.707901    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:45.707905    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:45.707908    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:45 GMT
	I0917 02:26:45.707911    5221 round_trippers.go:580]     Audit-Id: cab2b667-95a2-4fa7-9823-9feb0bf49a7f
	I0917 02:26:45.708054    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:46.205958    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:46.206023    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:46.206081    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:46.206095    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:46.208712    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:46.208725    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:46.208731    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:46 GMT
	I0917 02:26:46.208736    5221 round_trippers.go:580]     Audit-Id: 1f2133c4-73b0-4118-9b54-2acbbd6468d5
	I0917 02:26:46.208740    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:46.208743    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:46.208748    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:46.208752    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:46.208874    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:46.706203    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:46.706234    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:46.706247    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:46.706254    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:46.708936    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:46.708953    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:46.708962    5221 round_trippers.go:580]     Audit-Id: 12806bb7-0594-4493-9672-25343dd3f338
	I0917 02:26:46.708971    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:46.708976    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:46.708979    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:46.708982    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:46.708986    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:46 GMT
	I0917 02:26:46.709119    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:47.205588    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:47.205604    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:47.205613    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:47.205617    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:47.207789    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:47.207801    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:47.207810    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:47.207814    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:47 GMT
	I0917 02:26:47.207817    5221 round_trippers.go:580]     Audit-Id: 1ebccb42-0f35-48e5-8d90-13029f3c23b1
	I0917 02:26:47.207820    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:47.207822    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:47.207825    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:47.208016    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:47.208208    5221 node_ready.go:53] node "multinode-232000-m02" has status "Ready":"False"
	I0917 02:26:47.705503    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:47.705532    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:47.705544    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:47.705551    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:47.708306    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:47.708328    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:47.708338    5221 round_trippers.go:580]     Audit-Id: 216a3399-f645-4943-8744-e2b320ec60bd
	I0917 02:26:47.708345    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:47.708351    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:47.708358    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:47.708367    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:47.708377    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:47 GMT
	I0917 02:26:47.708481    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:48.205451    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:48.205498    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:48.205509    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:48.205529    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:48.207304    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:48.207318    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:48.207323    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:48.207326    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:48.207328    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:48.207330    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:48 GMT
	I0917 02:26:48.207332    5221 round_trippers.go:580]     Audit-Id: bec87cad-1e4b-475c-bc6c-af883efcaa5c
	I0917 02:26:48.207335    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:48.207459    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:48.704919    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:48.704941    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:48.704953    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:48.704960    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:48.707743    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:48.707759    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:48.707766    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:48.707770    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:48.707775    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:48.707779    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:48 GMT
	I0917 02:26:48.707782    5221 round_trippers.go:580]     Audit-Id: fd08b290-f6db-4928-b6fd-5c0355dee24f
	I0917 02:26:48.707786    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:48.707935    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:49.204628    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:49.204654    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:49.204666    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:49.204673    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:49.207587    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:49.207604    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:49.207610    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:49.207615    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:49 GMT
	I0917 02:26:49.207618    5221 round_trippers.go:580]     Audit-Id: a11acfcb-5ab4-4229-b0d8-a32cafd6295d
	I0917 02:26:49.207636    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:49.207642    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:49.207646    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:49.207710    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1005","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0917 02:26:49.706162    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:49.706216    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:49.706229    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:49.706237    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:49.708948    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:49.708964    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:49.708970    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:49.708974    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:49.708977    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:49 GMT
	I0917 02:26:49.708981    5221 round_trippers.go:580]     Audit-Id: de84c1db-b516-416a-92ec-b1e8a2ffc5b9
	I0917 02:26:49.708985    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:49.708988    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:49.709091    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:49.709325    5221 node_ready.go:53] node "multinode-232000-m02" has status "Ready":"False"
	I0917 02:26:50.205145    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:50.205167    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:50.205179    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:50.205185    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:50.207791    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:50.207807    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:50.207814    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:50 GMT
	I0917 02:26:50.207818    5221 round_trippers.go:580]     Audit-Id: 661d91e3-ca0a-49c6-ad0a-98089ee256dc
	I0917 02:26:50.207837    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:50.207849    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:50.207854    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:50.207860    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:50.207971    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:50.706210    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:50.706237    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:50.706249    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:50.706254    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:50.709206    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:50.709231    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:50.709239    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:50 GMT
	I0917 02:26:50.709244    5221 round_trippers.go:580]     Audit-Id: 066c91b0-8955-452c-a6a4-1fd5d4cb52c1
	I0917 02:26:50.709248    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:50.709251    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:50.709256    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:50.709260    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:50.709614    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:51.205828    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:51.205853    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:51.205864    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:51.205869    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:51.208730    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:51.208749    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:51.208757    5221 round_trippers.go:580]     Audit-Id: a680a971-b3db-4a40-b236-751e281f0c10
	I0917 02:26:51.208778    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:51.208787    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:51.208791    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:51.208799    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:51.208803    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:51 GMT
	I0917 02:26:51.209057    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:51.704733    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:51.704759    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:51.704771    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:51.704787    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:51.707494    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:51.707520    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:51.707536    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:51 GMT
	I0917 02:26:51.707551    5221 round_trippers.go:580]     Audit-Id: 1c60bd6d-6119-45b2-8b03-b68f4fdfefc1
	I0917 02:26:51.707561    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:51.707564    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:51.707569    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:51.707572    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:51.707928    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:52.206016    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:52.206057    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:52.206068    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:52.206076    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:52.208749    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:52.208766    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:52.208787    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:52.208802    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:52.208811    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:52.208817    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:52.208821    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:52 GMT
	I0917 02:26:52.208825    5221 round_trippers.go:580]     Audit-Id: bd24169b-45b6-49b1-b352-c23101412f71
	I0917 02:26:52.209147    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:52.209381    5221 node_ready.go:53] node "multinode-232000-m02" has status "Ready":"False"
	I0917 02:26:52.704751    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:52.704777    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:52.704789    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:52.704797    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:52.707436    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:52.707452    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:52.707461    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:52.707466    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:52.707471    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:52.707482    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:52 GMT
	I0917 02:26:52.707487    5221 round_trippers.go:580]     Audit-Id: 1965a618-b2f5-4b72-89ec-7ec58c288586
	I0917 02:26:52.707490    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:52.707552    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:53.204650    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:53.204670    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:53.204695    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:53.204704    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:53.206425    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:53.206437    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:53.206443    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:53.206446    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:53.206449    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:53 GMT
	I0917 02:26:53.206451    5221 round_trippers.go:580]     Audit-Id: 5f47ce88-be06-470e-8276-4a5c0bf159e6
	I0917 02:26:53.206453    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:53.206456    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:53.206558    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:53.704222    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:53.704280    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:53.704296    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:53.704310    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:53.706876    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:53.706897    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:53.706908    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:53.706913    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:53.706935    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:53.706940    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:53 GMT
	I0917 02:26:53.706943    5221 round_trippers.go:580]     Audit-Id: 09381c89-2fbd-4747-b0e0-8e2517fdd396
	I0917 02:26:53.706946    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:53.707195    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:54.205593    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:54.205606    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.205613    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.205616    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.206954    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:54.206966    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.206972    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.206975    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.206979    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.206981    5221 round_trippers.go:580]     Audit-Id: b69f3ebc-ce49-4646-980b-04f4a53c14f8
	I0917 02:26:54.206984    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.206987    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.207107    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1019","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4066 chars]
	I0917 02:26:54.706281    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:54.706309    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.706361    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.706373    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.708895    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:54.708915    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.708922    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.708927    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.708951    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.708960    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.708963    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.708967    5221 round_trippers.go:580]     Audit-Id: c398d8de-3b2b-4ae2-986b-7f6884235f5d
	I0917 02:26:54.709060    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1027","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3932 chars]
	I0917 02:26:54.709297    5221 node_ready.go:49] node "multinode-232000-m02" has status "Ready":"True"
	I0917 02:26:54.709309    5221 node_ready.go:38] duration metric: took 14.005321244s for node "multinode-232000-m02" to be "Ready" ...
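The repeated GETs of /api/v1/nodes/multinode-232000-m02 above are minikube polling the node object roughly every 500ms until its Ready condition reports "True" (node_ready.go in the log). A minimal client-go sketch of the same pattern; the helper name and loop wiring are illustrative, not minikube's actual implementation:

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the Node object until its Ready condition is
	// "True", mirroring the ~500ms GET loop visible in the log above.
	// Transient GET errors are swallowed and the poll simply retries.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		for {
			n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range n.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}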
	I0917 02:26:54.709316    5221 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:26:54.709367    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0917 02:26:54.709374    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.709382    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.709387    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.711895    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:54.711906    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.711911    5221 round_trippers.go:580]     Audit-Id: 65810a52-6b1a-4681-9c14-47de07b164ab
	I0917 02:26:54.711915    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.711918    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.711922    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.711925    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.711928    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.712717    5221 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1027"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"926","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 89364 chars]
	I0917 02:26:54.714652    5221 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.714700    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hr8rd
	I0917 02:26:54.714705    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.714710    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.714714    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.715948    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:54.715959    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.715965    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.715967    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.715971    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.715978    5221 round_trippers.go:580]     Audit-Id: 11a52131-ec17-4cd6-9d95-dc6af5a8f9ad
	I0917 02:26:54.715989    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.716000    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.716173    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-hr8rd","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"c990c87f-921e-45ba-845b-499147aaa1f9","resourceVersion":"926","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8fc9c1e9-f113-4613-91a4-4b5177832446","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fc9c1e9-f113-4613-91a4-4b5177832446\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7039 chars]
	I0917 02:26:54.716435    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:54.716442    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.716451    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.716456    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.717441    5221 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:26:54.717451    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.717462    5221 round_trippers.go:580]     Audit-Id: 5743721b-89a4-4f23-baee-f74e75914f89
	I0917 02:26:54.717471    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.717477    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.717480    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.717483    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.717487    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.717595    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5172 chars]
	I0917 02:26:54.717764    5221 pod_ready.go:93] pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:54.717771    5221 pod_ready.go:82] duration metric: took 3.109535ms for pod "coredns-7c65d6cfc9-hr8rd" in "kube-system" namespace to be "Ready" ...
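Each per-pod wait (pod_ready.go:79) has the same shape: GET the pod, GET its hosting node, then declare readiness from the pod's Ready condition. The pod-side check reduces to a small predicate; a sketch with an illustrative helper name:

	// podIsReady reports whether a Pod's Ready condition is "True".
	func podIsReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}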
	I0917 02:26:54.717777    5221 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.717813    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-232000
	I0917 02:26:54.717818    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.717826    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.717830    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.718847    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:54.718855    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.718861    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.718865    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.718868    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.718870    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.718872    5221 round_trippers.go:580]     Audit-Id: 52ce59f6-74f6-43c0-9fe8-101666220ed8
	I0917 02:26:54.718875    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.719114    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-232000","namespace":"kube-system","uid":"023b8525-6267-41df-ab63-f9c82adf3da1","resourceVersion":"895","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.mirror":"04b75db992fd3241846557ea586378aa","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953777Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6663 chars]
	I0917 02:26:54.719319    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:54.719326    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.719332    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.719336    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.720347    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:54.720354    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.720359    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.720362    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.720365    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.720368    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.720371    5221 round_trippers.go:580]     Audit-Id: bef4ea84-9061-402b-9ff2-93b6b76f44a8
	I0917 02:26:54.720373    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.720475    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5172 chars]
	I0917 02:26:54.720645    5221 pod_ready.go:93] pod "etcd-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:54.720652    5221 pod_ready.go:82] duration metric: took 2.87192ms for pod "etcd-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.720669    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.720699    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-232000
	I0917 02:26:54.720703    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.720708    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.720712    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.721675    5221 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:26:54.721684    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.721700    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.721707    5221 round_trippers.go:580]     Audit-Id: 425ed1c6-8d21-4c82-830e-dbc18d1e8788
	I0917 02:26:54.721712    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.721716    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.721719    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.721723    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.721818    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-232000","namespace":"kube-system","uid":"4bc0fa4f-4ca7-478d-8b7c-b59c24a56faa","resourceVersion":"899","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"67a66a13a29b1cc7d88ed81650cbab1c","kubernetes.io/config.mirror":"67a66a13a29b1cc7d88ed81650cbab1c","kubernetes.io/config.seen":"2024-09-17T09:21:50.527954370Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0917 02:26:54.722040    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:54.722046    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.722051    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.722056    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.722928    5221 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:26:54.722934    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.722938    5221 round_trippers.go:580]     Audit-Id: 70b33c2d-ed03-4f03-94ed-729d440b127f
	I0917 02:26:54.722948    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.722951    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.722953    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.722956    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.722958    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.723104    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5172 chars]
	I0917 02:26:54.723271    5221 pod_ready.go:93] pod "kube-apiserver-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:54.723279    5221 pod_ready.go:82] duration metric: took 2.605319ms for pod "kube-apiserver-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.723285    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.723313    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-232000
	I0917 02:26:54.723320    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.723335    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.723341    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.724439    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:54.724446    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.724451    5221 round_trippers.go:580]     Audit-Id: 7ff0d1c5-f140-4bf1-9475-96a70dce641b
	I0917 02:26:54.724454    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.724456    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.724459    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.724466    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.724469    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.724629    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-232000","namespace":"kube-system","uid":"788e2a30-fcea-4f4c-afc3-52d73d046e1d","resourceVersion":"914","creationTimestamp":"2024-09-17T09:21:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70ae9bf162cfffffa860fd43666d4b44","kubernetes.io/config.mirror":"70ae9bf162cfffffa860fd43666d4b44","kubernetes.io/config.seen":"2024-09-17T09:21:55.992286729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0917 02:26:54.724860    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:54.724867    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.724872    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.724875    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.725742    5221 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 02:26:54.725751    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.725759    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:54 GMT
	I0917 02:26:54.725763    5221 round_trippers.go:580]     Audit-Id: a524a2ff-b379-4ef9-a11a-100985947566
	I0917 02:26:54.725766    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.725769    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.725771    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.725774    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.725908    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5172 chars]
	I0917 02:26:54.726083    5221 pod_ready.go:93] pod "kube-controller-manager-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:54.726090    5221 pod_ready.go:82] duration metric: took 2.800799ms for pod "kube-controller-manager-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.726099    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8fb4t" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:54.907369    5221 request.go:632] Waited for 181.176649ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fb4t
	I0917 02:26:54.907432    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fb4t
	I0917 02:26:54.907443    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:54.907453    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:54.907459    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:54.910021    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:54.910037    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:54.910044    5221 round_trippers.go:580]     Audit-Id: 8873ea26-61d5-45b4-99a1-26e711d7fba6
	I0917 02:26:54.910048    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:54.910052    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:54.910056    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:54.910059    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:54.910065    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:55 GMT
	I0917 02:26:54.910206    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8fb4t","generateName":"kube-proxy-","namespace":"kube-system","uid":"e73b5d46-804f-4a13-a286-f0194436c3fc","resourceVersion":"1006","creationTimestamp":"2024-09-17T09:22:43Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0917 02:26:55.107126    5221 request.go:632] Waited for 196.555016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:55.107211    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m02
	I0917 02:26:55.107222    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:55.107233    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:55.107240    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:55.109622    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:55.109635    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:55.109642    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:55.109645    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:55.109648    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:55.109652    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:55.109657    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:55 GMT
	I0917 02:26:55.109660    5221 round_trippers.go:580]     Audit-Id: e64bfbe8-9684-4016-930e-e97450ef7e14
	I0917 02:26:55.109844    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m02","uid":"8a39e63f-585f-44ee-937f-50f7818097a1","resourceVersion":"1027","creationTimestamp":"2024-09-17T09:26:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_26_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3932 chars]
	I0917 02:26:55.110081    5221 pod_ready.go:93] pod "kube-proxy-8fb4t" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:55.110092    5221 pod_ready.go:82] duration metric: took 383.985246ms for pod "kube-proxy-8fb4t" in "kube-system" namespace to be "Ready" ...
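The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own rate limiter, not from the API server: when rest.Config leaves them at zero, client-go defaults to QPS 5 with Burst 10, so this burst of node and pod GETs gets spaced out on the client. A sketch of building a client with higher limits (the function name, path, and values are illustrative):

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newFastClient builds a clientset with a larger client-side rate
	// limit than client-go's defaults (QPS 5, Burst 10 when unset).
	func newFastClient(kubeconfig string) *kubernetes.Clientset {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		cfg.QPS = 50    // default is 5 when left at zero
		cfg.Burst = 100 // default is 10 when left at zero
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		return cs
	}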
	I0917 02:26:55.110100    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9s8zh" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:55.306481    5221 request.go:632] Waited for 196.334498ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9s8zh
	I0917 02:26:55.306535    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9s8zh
	I0917 02:26:55.306541    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:55.306547    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:55.306550    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:55.308155    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:55.308164    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:55.308169    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:55.308172    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:55.308175    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:55.308178    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:55.308180    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:55 GMT
	I0917 02:26:55.308182    5221 round_trippers.go:580]     Audit-Id: 3f765a57-d32d-4cea-bbfa-e83fb9c0627d
	I0917 02:26:55.308309    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9s8zh","generateName":"kube-proxy-","namespace":"kube-system","uid":"8516d216-3857-4702-9656-97c8c91337fc","resourceVersion":"890","creationTimestamp":"2024-09-17T09:22:01Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:22:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6394 chars]
	I0917 02:26:55.507848    5221 request.go:632] Waited for 199.260774ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:55.507923    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:55.507931    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:55.507939    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:55.507948    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:55.509929    5221 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0917 02:26:55.509943    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:55.509951    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:55.509958    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:55.509963    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:55 GMT
	I0917 02:26:55.509968    5221 round_trippers.go:580]     Audit-Id: cb92c8b5-1ddd-43cd-be4c-f0b2ac6cbacb
	I0917 02:26:55.509973    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:55.509977    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:55.510144    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5172 chars]
	I0917 02:26:55.510346    5221 pod_ready.go:93] pod "kube-proxy-9s8zh" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:55.510355    5221 pod_ready.go:82] duration metric: took 400.247776ms for pod "kube-proxy-9s8zh" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:55.510362    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xlb2z" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:55.707746    5221 request.go:632] Waited for 197.337667ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xlb2z
	I0917 02:26:55.707828    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xlb2z
	I0917 02:26:55.707846    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:55.707859    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:55.707865    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:55.710457    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:55.710472    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:55.710479    5221 round_trippers.go:580]     Audit-Id: 5c4edb6b-48c8-4507-a9e7-40e68cc85f8a
	I0917 02:26:55.710484    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:55.710488    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:55.710493    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:55.710497    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:55.710503    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:55 GMT
	I0917 02:26:55.710609    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xlb2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"66e8dada-5a23-453e-ba6e-a9146d3467e7","resourceVersion":"996","creationTimestamp":"2024-09-17T09:23:37Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fb6f259a-808b-4cb9-a679-3500f264f1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:23:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb6f259a-808b-4cb9-a679-3500f264f1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6422 chars]
	I0917 02:26:55.908226    5221 request.go:632] Waited for 197.233912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m03
	I0917 02:26:55.908279    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000-m03
	I0917 02:26:55.908288    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:55.908299    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:55.908307    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:55.910888    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:55.910905    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:55.910912    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:56 GMT
	I0917 02:26:55.910916    5221 round_trippers.go:580]     Audit-Id: 0aac35a3-c1d3-4d6c-aa7b-84fdbbfe27ce
	I0917 02:26:55.910920    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:55.910923    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:55.910926    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:55.910930    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:55.911029    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000-m03","uid":"ca6d8a0b-78e8-401d-8fd0-21af7b79983d","resourceVersion":"1023","creationTimestamp":"2024-09-17T09:24:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_17T02_24_31_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:24:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0917 02:26:55.911296    5221 pod_ready.go:98] node "multinode-232000-m03" hosting pod "kube-proxy-xlb2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000-m03" has status "Ready":"Unknown"
	I0917 02:26:55.911313    5221 pod_ready.go:82] duration metric: took 400.94378ms for pod "kube-proxy-xlb2z" in "kube-system" namespace to be "Ready" ...
	E0917 02:26:55.911322    5221 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-232000-m03" hosting pod "kube-proxy-xlb2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-232000-m03" has status "Ready":"Unknown"
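Note the skip logic here: kube-proxy-xlb2z itself may be intact, but its node multinode-232000-m03 reports Ready "Unknown" (its hyperkit VM is down, as the restart later in this log confirms), so the wait is abandoned instead of burning the full 6m0s budget. The gate reduces to reading the node's Ready condition status; a sketch:

	// nodeReadyStatus returns the status of the node's Ready condition:
	// "True", "False", or "Unknown" when the kubelet stopped reporting.
	func nodeReadyStatus(n *corev1.Node) corev1.ConditionStatus {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status
			}
		}
		return corev1.ConditionUnknown
	}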
	I0917 02:26:55.911346    5221 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:56.106469    5221 request.go:632] Waited for 195.008281ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-232000
	I0917 02:26:56.106529    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-232000
	I0917 02:26:56.106538    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:56.106549    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:56.106558    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:56.109150    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:56.109162    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:56.109167    5221 round_trippers.go:580]     Audit-Id: 25bb6df9-481a-4fbe-b913-9420ec1197db
	I0917 02:26:56.109171    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:56.109173    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:56.109176    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:56.109178    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:56.109180    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:56 GMT
	I0917 02:26:56.109357    5221 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-232000","namespace":"kube-system","uid":"a38a42a2-e0f9-4c6e-aa99-8dae3f326090","resourceVersion":"910","creationTimestamp":"2024-09-17T09:21:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6ab4348c7fba41d6fa49c901ef5e8acf","kubernetes.io/config.mirror":"6ab4348c7fba41d6fa49c901ef5e8acf","kubernetes.io/config.seen":"2024-09-17T09:21:50.527953069Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T09:21:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0917 02:26:56.308345    5221 request.go:632] Waited for 198.710667ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:56.308416    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-232000
	I0917 02:26:56.308458    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:56.308476    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:56.308484    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:56.310835    5221 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 02:26:56.310855    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:56.310863    5221 round_trippers.go:580]     Audit-Id: 320b3fc2-7426-4a90-b0c3-b33f2fdfef24
	I0917 02:26:56.310878    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:56.310882    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:56.310886    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:56.310889    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:56.310894    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:56 GMT
	I0917 02:26:56.311210    5221 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-17T09:21:53Z","fieldsType":"FieldsV1","fi [truncated 5172 chars]
	I0917 02:26:56.311400    5221 pod_ready.go:93] pod "kube-scheduler-multinode-232000" in "kube-system" namespace has status "Ready":"True"
	I0917 02:26:56.311409    5221 pod_ready.go:82] duration metric: took 400.0373ms for pod "kube-scheduler-multinode-232000" in "kube-system" namespace to be "Ready" ...
	I0917 02:26:56.311416    5221 pod_ready.go:39] duration metric: took 1.602085565s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 02:26:56.311428    5221 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 02:26:56.311490    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:26:56.322009    5221 system_svc.go:56] duration metric: took 10.575049ms WaitForService to wait for kubelet
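The kubelet probe above is purely an exit-code check run over SSH: `systemctl is-active --quiet` prints nothing and exits 0 iff the unit is active. A local equivalent in Go (sketch; the helper name is illustrative):

	import "os/exec"

	// kubeletActive mirrors the probe in the log: a nil error from Run
	// means `systemctl is-active --quiet kubelet` exited 0, i.e. active.
	func kubeletActive() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}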
	I0917 02:26:56.322028    5221 kubeadm.go:582] duration metric: took 15.78853698s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:26:56.322045    5221 node_conditions.go:102] verifying NodePressure condition ...
	I0917 02:26:56.506353    5221 request.go:632] Waited for 184.26297ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes
	I0917 02:26:56.506410    5221 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0917 02:26:56.506415    5221 round_trippers.go:469] Request Headers:
	I0917 02:26:56.506421    5221 round_trippers.go:473]     Accept: application/json, */*
	I0917 02:26:56.506426    5221 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0917 02:26:56.509766    5221 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 02:26:56.509781    5221 round_trippers.go:577] Response Headers:
	I0917 02:26:56.509787    5221 round_trippers.go:580]     Audit-Id: 12331aab-1be8-48a8-b6b2-a02524208e8a
	I0917 02:26:56.509792    5221 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 02:26:56.509796    5221 round_trippers.go:580]     Content-Type: application/json
	I0917 02:26:56.509801    5221 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7fa8f40a-99b6-46b4-8e0c-6c5ab04b31d2
	I0917 02:26:56.509814    5221 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a369ceca-80a0-43f7-a8dd-860c9a14313d
	I0917 02:26:56.509818    5221 round_trippers.go:580]     Date: Tue, 17 Sep 2024 09:26:56 GMT
	I0917 02:26:56.510062    5221 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1027"},"items":[{"metadata":{"name":"multinode-232000","uid":"08722ff4-7ac9-439b-95f9-86d2c62063b2","resourceVersion":"934","creationTimestamp":"2024-09-17T09:21:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-232000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"9256ba43b41ea130fa48757ddb8d93db00574f61","minikube.k8s.io/name":"multinode-232000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T02_21_56_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 15541 chars]
	I0917 02:26:56.510477    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:56.510486    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:56.510493    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:56.510496    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:56.510499    5221 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 02:26:56.510501    5221 node_conditions.go:123] node cpu capacity is 2
	I0917 02:26:56.510504    5221 node_conditions.go:105] duration metric: took 188.455004ms to run NodePressure ...
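The NodePressure pass lists all three nodes once and reads capacity straight off .status.capacity (here: 2 CPUs and 17734596Ki of ephemeral storage per node). An equivalent read in client-go; the function is a sketch, not minikube's code:

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity lists every node once and reports the capacity
	// fields the NodePressure check above logs for each item.
	func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
		return nil
	}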
	I0917 02:26:56.510513    5221 start.go:241] waiting for startup goroutines ...
	I0917 02:26:56.510531    5221 start.go:255] writing updated cluster config ...
	I0917 02:26:56.531293    5221 out.go:201] 
	I0917 02:26:56.552169    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:26:56.552271    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:26:56.574096    5221 out.go:177] * Starting "multinode-232000-m03" worker node in "multinode-232000" cluster
	I0917 02:26:56.616005    5221 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:26:56.616032    5221 cache.go:56] Caching tarball of preloaded images
	I0917 02:26:56.616181    5221 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 02:26:56.616194    5221 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:26:56.616287    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:26:56.617058    5221 start.go:360] acquireMachinesLock for multinode-232000-m03: {Name:mkcb27accfc361dc9c175eeb8bb69dc9967aa557 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:26:56.617135    5221 start.go:364] duration metric: took 58.158µs to acquireMachinesLock for "multinode-232000-m03"
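acquireMachinesLock serializes machine create/start across concurrent minikube processes on the same host; the Spec dump above (Delay:500ms, Timeout:13m0s) shows a retrying, time-bounded lock. A rough single-host analogue using an advisory file lock; this is an illustration only, minikube's actual lock package differs:

	import (
		"fmt"
		"os"
		"syscall"
		"time"
	)

	// tryLock takes an exclusive, non-blocking advisory lock on path,
	// retrying every delay until timeout, in the spirit of the Spec above.
	func tryLock(path string, delay, timeout time.Duration) (*os.File, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
		if err != nil {
			return nil, err
		}
		deadline := time.Now().Add(timeout)
		for {
			if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
				return f, nil
			}
			if time.Now().After(deadline) {
				f.Close()
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}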
	I0917 02:26:56.617164    5221 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:26:56.617170    5221 fix.go:54] fixHost starting: m03
	I0917 02:26:56.617493    5221 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:26:56.617520    5221 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:26:56.626261    5221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53490
	I0917 02:26:56.626628    5221 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:26:56.626996    5221 main.go:141] libmachine: Using API Version  1
	I0917 02:26:56.627019    5221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:26:56.627250    5221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:26:56.627367    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:26:56.627481    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetState
	I0917 02:26:56.627571    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:56.627660    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | hyperkit pid from json: 5155
	I0917 02:26:56.628608    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | hyperkit pid 5155 missing from process table
	I0917 02:26:56.628607    5221 fix.go:112] recreateIfNeeded on multinode-232000-m03: state=Stopped err=<nil>
	I0917 02:26:56.628621    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	W0917 02:26:56.628704    5221 fix.go:138] unexpected machine state, will restart: <nil>
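The state=Stopped verdict comes from the pid probe two lines up: the saved hyperkit pid (5155) no longer exists in the process table, so fixHost concludes the VM is down and falls through to a restart. On Unix that kind of probe is signal 0; a sketch:

	import (
		"os"
		"syscall"
	)

	// pidAlive reports whether a process with the given pid exists.
	// On Unix, os.FindProcess always succeeds, so we probe with signal 0,
	// which checks existence/permissions without delivering a signal.
	func pidAlive(pid int) bool {
		p, err := os.FindProcess(pid)
		if err != nil {
			return false
		}
		return p.Signal(syscall.Signal(0)) == nil
	}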
	I0917 02:26:56.650179    5221 out.go:177] * Restarting existing hyperkit VM for "multinode-232000-m03" ...
	I0917 02:26:56.692010    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .Start
	I0917 02:26:56.692225    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:56.692273    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/hyperkit.pid
	I0917 02:26:56.692316    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Using UUID d1ac9720-c400-4519-b59b-fee993a19e36
	I0917 02:26:56.718521    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Generated MAC d2:11:43:9a:a8:47
	I0917 02:26:56.718543    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000
	I0917 02:26:56.718686    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d1ac9720-c400-4519-b59b-fee993a19e36", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b770)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:26:56.718712    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d1ac9720-c400-4519-b59b-fee993a19e36", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b770)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0917 02:26:56.718749    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "d1ac9720-c400-4519-b59b-fee993a19e36", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/multinode-232000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000"}
	I0917 02:26:56.718788    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U d1ac9720-c400-4519-b59b-fee993a19e36 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/multinode-232000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/tty,log=/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/bzimage,/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-232000"
	I0917 02:26:56.718815    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0917 02:26:56.720344    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 DEBUG: hyperkit: Pid is 5295
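
The Start/check/Arguments/CmdLine entries above are the complete hyperkit invocation the driver builds, and "Pid is 5295" is the child it just forked. A minimal Go sketch of assembling and starting that argv with os/exec; here state is a hypothetical stand-in for the machine StateDir, and only the head of the kexec boot string is kept:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        state := "/path/to/machines/multinode-232000-m03" // hypothetical stand-in for the StateDir above
        args := []string{
            "-A", "-u",
            "-F", state + "/hyperkit.pid", // pid file the driver later polls
            "-c", "2", "-m", "2200M",
            "-s", "0:0,hostbridge", "-s", "31,lpc",
            "-s", "1:0,virtio-net",
            "-U", "d1ac9720-c400-4519-b59b-fee993a19e36",
            "-s", "2:0,virtio-blk," + state + "/multinode-232000-m03.rawdisk",
            "-s", "3,ahci-cd," + state + "/boot2docker.iso",
            "-s", "4,virtio-rnd",
            "-l", "com1,autopty=" + state + "/tty,log=" + state + "/console-ring",
            "-f", "kexec," + state + "/bzimage," + state + "/initrd,earlyprintk=serial loglevel=3 console=ttyS0",
        }
        cmd := exec.Command("/usr/local/bin/hyperkit", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        // Start, not Run: the driver records the child pid and returns while the VM boots.
        if err := cmd.Start(); err != nil {
            panic(err)
        }
    }

The "Attempt 0" loop that follows is the driver polling that pid and the DHCP lease table until the guest is reachable.
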
	I0917 02:26:56.720831    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Attempt 0
	I0917 02:26:56.720839    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:26:56.720922    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | hyperkit pid from json: 5295
	I0917 02:26:56.722031    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Searching for d2:11:43:9a:a8:47 in /var/db/dhcpd_leases ...
	I0917 02:26:56.722096    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0917 02:26:56.722112    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:f1:ae:9f:da:63 ID:1,66:f1:ae:9f:da:63 Lease:0x66ea9cc2}
	I0917 02:26:56.722144    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:5a:1f:11:e5:b7:54 ID:1,5a:1f:11:e5:b7:54 Lease:0x66ea9c80}
	I0917 02:26:56.722175    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:d2:11:43:9a:a8:47 ID:1,d2:11:43:9a:a8:47 Lease:0x66e94ae5}
	I0917 02:26:56.722199    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetConfigRaw
	I0917 02:26:56.722195    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | Found match: d2:11:43:9a:a8:47
	I0917 02:26:56.722218    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | IP: 192.169.0.16
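
The IP discovery above is a plain-text scan of macOS's /var/db/dhcpd_leases for the VM's generated MAC, stopping at the first lease block that contains it. A rough Go sketch of that lookup, assuming the key=value lease layout suggested by the log; the field names here are illustrative, not a byte-for-byte match of the file:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC scans the vmnet DHCP lease file for hwAddr and returns the
    // ip_address recorded in the same lease block.
    func ipForMAC(path, hwAddr string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ip_address=") {
                ip = strings.TrimPrefix(line, "ip_address=") // remember the current block's IP
            }
            if strings.Contains(line, hwAddr) && ip != "" {
                return ip, nil // MAC seen inside the current lease block
            }
        }
        return "", fmt.Errorf("no lease found for %s", hwAddr)
    }

    func main() {
        ip, err := ipForMAC("/var/db/dhcpd_leases", "d2:11:43:9a:a8:47")
        fmt.Println(ip, err)
    }

The driver reports fifteen leases but prints only the entries it inspected before matching d2:11:43:9a:a8:47, which is presumably why just three appear above.
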
	I0917 02:26:56.722850    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetIP
	I0917 02:26:56.723062    5221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/multinode-232000/config.json ...
	I0917 02:26:56.723645    5221 machine.go:93] provisionDockerMachine start ...
	I0917 02:26:56.723658    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:26:56.723787    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:26:56.723888    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:26:56.724034    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:26:56.724135    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:26:56.724235    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:26:56.724355    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:26:56.724508    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:26:56.724514    5221 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:26:56.728260    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0917 02:26:56.737031    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0917 02:26:56.737901    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:26:56.737915    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:26:56.737922    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:26:56.737942    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:26:57.121903    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0917 02:26:57.121918    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0917 02:26:57.236666    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0917 02:26:57.236681    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0917 02:26:57.236689    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0917 02:26:57.236698    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0917 02:26:57.237536    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0917 02:26:57.237546    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:26:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0917 02:27:02.855427    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:27:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0917 02:27:02.855495    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:27:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0917 02:27:02.855506    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:27:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0917 02:27:02.878415    5221 main.go:141] libmachine: (multinode-232000-m03) DBG | 2024/09/17 02:27:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0917 02:27:07.791740    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:27:07.791764    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetMachineName
	I0917 02:27:07.791894    5221 buildroot.go:166] provisioning hostname "multinode-232000-m03"
	I0917 02:27:07.791903    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetMachineName
	I0917 02:27:07.791995    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:07.792069    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:07.792153    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:07.792230    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:07.792312    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:07.792431    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:27:07.792585    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:27:07.792593    5221 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-232000-m03 && echo "multinode-232000-m03" | sudo tee /etc/hostname
	I0917 02:27:07.864742    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-232000-m03
	
	I0917 02:27:07.864759    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:07.864886    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:07.864977    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:07.865068    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:07.865165    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:07.865308    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:27:07.865454    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:27:07.865465    5221 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-232000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-232000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-232000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:27:07.933359    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: 
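
The quoted shell above keeps the node name resolvable by pinning multinode-232000-m03 to 127.0.1.1: if the hostname is already in /etc/hosts it does nothing, otherwise it rewrites an existing 127.0.1.1 line or appends a new one. The same logic as a self-contained Go sketch operating on the file contents (pinHostname is an invented helper name):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // pinHostname mirrors the shell above: leave the file alone if the name is
    // present, rewrite an existing 127.0.1.1 entry, or append a fresh one.
    func pinHostname(hosts, name string) string {
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts // hostname already resolvable
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(hosts) {
            return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(pinHostname("127.0.0.1 localhost\n", "multinode-232000-m03"))
    }

The empty SSH output above is the expected success case: the name was missing, so the append branch ran silently via sudo tee -a.
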
	I0917 02:27:07.933373    5221 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1025/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1025/.minikube}
	I0917 02:27:07.933385    5221 buildroot.go:174] setting up certificates
	I0917 02:27:07.933425    5221 provision.go:84] configureAuth start
	I0917 02:27:07.933432    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetMachineName
	I0917 02:27:07.933562    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetIP
	I0917 02:27:07.933682    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:07.933774    5221 provision.go:143] copyHostCerts
	I0917 02:27:07.933802    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:27:07.933860    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem, removing ...
	I0917 02:27:07.933866    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem
	I0917 02:27:07.933980    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/ca.pem (1078 bytes)
	I0917 02:27:07.934171    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:27:07.934210    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem, removing ...
	I0917 02:27:07.934215    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem
	I0917 02:27:07.934290    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/cert.pem (1123 bytes)
	I0917 02:27:07.934431    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:27:07.934474    5221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem, removing ...
	I0917 02:27:07.934485    5221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem
	I0917 02:27:07.934560    5221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1025/.minikube/key.pem (1675 bytes)
	I0917 02:27:07.934706    5221 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca-key.pem org=jenkins.multinode-232000-m03 san=[127.0.0.1 192.169.0.16 localhost minikube multinode-232000-m03]
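
provision.go:117 signs a server certificate whose SAN set is listed above (127.0.0.1, 192.169.0.16, localhost, minikube, multinode-232000-m03). A self-signed stand-in showing the SAN wiring with Go's crypto/x509; minikube signs with the cluster CA (ca.pem/ca-key.pem) rather than self-signing, so this is only a sketch of the template:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-232000-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN list taken from the provision.go line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.16")},
            DNSNames:    []string{"localhost", "minikube", "multinode-232000-m03"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

The copyRemoteCerts step that follows then scp's the resulting server.pem/server-key.pem plus the CA into /etc/docker so dockerd can serve TLS on 2376.
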
	I0917 02:27:08.109556    5221 provision.go:177] copyRemoteCerts
	I0917 02:27:08.109624    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:27:08.109639    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:08.109791    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:08.109895    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.110014    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:08.110101    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/id_rsa Username:docker}
	I0917 02:27:08.148644    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 02:27:08.148715    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0917 02:27:08.170743    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 02:27:08.170816    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:27:08.190414    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 02:27:08.190477    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 02:27:08.210572    5221 provision.go:87] duration metric: took 277.137929ms to configureAuth
	I0917 02:27:08.210586    5221 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:27:08.210763    5221 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:27:08.210777    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:27:08.210906    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:08.210999    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:08.211084    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.211160    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.211235    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:08.211354    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:27:08.211487    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:27:08.211495    5221 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:27:08.274645    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:27:08.274660    5221 buildroot.go:70] root file system type: tmpfs
	I0917 02:27:08.274730    5221 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:27:08.274739    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:08.274865    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:08.274954    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.275039    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.275122    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:08.275247    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:27:08.275381    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:27:08.275425    5221 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.14"
	Environment="NO_PROXY=192.169.0.14,192.169.0.15"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:27:08.347017    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.14
	Environment=NO_PROXY=192.169.0.14,192.169.0.15
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:27:08.347037    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:08.347171    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:08.347271    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.347376    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:08.347483    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:08.347637    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:27:08.347777    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:27:08.347789    5221 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:27:09.913613    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:27:09.913630    5221 machine.go:96] duration metric: took 13.189915264s to provisionDockerMachine
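
Two details of the unit update above are worth calling out. The generated unit clears ExecStart= before setting its own, because systemd rejects a service with two ExecStart= values unless Type=oneshot; and the install is guarded by diff, so the daemon-reload/enable/restart chain only runs when the rendered unit differs from what is on disk (here diff fails because the file does not exist yet, so the unit is installed fresh). A local Go sketch of that write-diff-swap-restart pattern (installIfChanged is an invented name; the real flow runs these steps over SSH):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged mirrors the `diff || { mv; daemon-reload; enable; restart; }`
    // one-liner above: only swap the unit and bounce the service on a real change.
    func installIfChanged(path string, unit []byte) error {
        old, _ := os.ReadFile(path) // a missing file reads as empty, like the failed diff
        if bytes.Equal(old, unit) {
            return nil
        }
        if err := os.WriteFile(path+".new", unit, 0644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "-f", "daemon-reload"},
            {"systemctl", "-f", "enable", "docker"},
            {"systemctl", "-f", "restart", "docker"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %s", err, out)
            }
        }
        return nil
    }

    func main() {
        _ = installIfChanged("/lib/systemd/system/docker.service", []byte("[Unit]\n"))
    }
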
	I0917 02:27:09.913639    5221 start.go:293] postStartSetup for "multinode-232000-m03" (driver="hyperkit")
	I0917 02:27:09.913647    5221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:27:09.913658    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:27:09.913851    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:27:09.913865    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:09.913960    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:09.914053    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:09.914143    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:09.914233    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/id_rsa Username:docker}
	I0917 02:27:09.951318    5221 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:27:09.954295    5221 command_runner.go:130] > NAME=Buildroot
	I0917 02:27:09.954304    5221 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0917 02:27:09.954308    5221 command_runner.go:130] > ID=buildroot
	I0917 02:27:09.954312    5221 command_runner.go:130] > VERSION_ID=2023.02.9
	I0917 02:27:09.954315    5221 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0917 02:27:09.954478    5221 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 02:27:09.954487    5221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/addons for local assets ...
	I0917 02:27:09.954582    5221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1025/.minikube/files for local assets ...
	I0917 02:27:09.954755    5221 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0917 02:27:09.954761    5221 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem -> /etc/ssl/certs/15602.pem
	I0917 02:27:09.954962    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:27:09.962189    5221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0917 02:27:09.981823    5221 start.go:296] duration metric: took 68.175521ms for postStartSetup
	I0917 02:27:09.981853    5221 fix.go:56] duration metric: took 13.364613789s for fixHost
	I0917 02:27:09.981867    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:09.981997    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:09.982080    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:09.982170    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:09.982246    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:09.982368    5221 main.go:141] libmachine: Using SSH client type: native
	I0917 02:27:09.982503    5221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2a35820] 0x2a38500 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0917 02:27:09.982510    5221 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:27:10.044353    5221 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726565230.013398915
	
	I0917 02:27:10.044364    5221 fix.go:216] guest clock: 1726565230.013398915
	I0917 02:27:10.044369    5221 fix.go:229] Guest: 2024-09-17 02:27:10.013398915 -0700 PDT Remote: 2024-09-17 02:27:09.981858 -0700 PDT m=+119.050037971 (delta=31.540915ms)
	I0917 02:27:10.044388    5221 fix.go:200] guest clock delta is within tolerance: 31.540915ms
	I0917 02:27:10.044393    5221 start.go:83] releasing machines lock for "multinode-232000-m03", held for 13.427188639s
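
The guest/host clock check at fix.go:216-229 runs `date +%s.%N` inside the VM and compares it with the host's wall clock; here the skew is a harmless 31.5ms. A sketch of the parse-and-compare, with a hypothetical 2-second tolerance (the real threshold lives in minikube's fix.go):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses `date +%s.%N` output from the guest and returns
    // how far the guest clock sits from the local one.
    func guestClockDelta(out string) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        nsec := int64(0)
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Since(time.Unix(sec, nsec)), nil
    }

    func main() {
        d, err := guestClockDelta("1726565230.013398915")
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second // hypothetical; used only for this sketch
        if d < -tolerance || d > tolerance {
            fmt.Println("guest clock outside tolerance:", d)
        } else {
            fmt.Println("guest clock within tolerance:", d)
        }
    }
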
	I0917 02:27:10.044408    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:27:10.044524    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetIP
	I0917 02:27:10.068467    5221 out.go:177] * Found network options:
	I0917 02:27:10.089147    5221 out.go:177]   - NO_PROXY=192.169.0.14,192.169.0.15
	W0917 02:27:10.110282    5221 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:27:10.110310    5221 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:27:10.110330    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:27:10.110971    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:27:10.111122    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .DriverName
	I0917 02:27:10.111213    5221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:27:10.111241    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	W0917 02:27:10.111277    5221 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 02:27:10.111297    5221 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 02:27:10.111371    5221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 02:27:10.111385    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHHostname
	I0917 02:27:10.111392    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:10.111568    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHPort
	I0917 02:27:10.111592    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:10.111731    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:10.111772    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHKeyPath
	I0917 02:27:10.111891    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/id_rsa Username:docker}
	I0917 02:27:10.111918    5221 main.go:141] libmachine: (multinode-232000-m03) Calling .GetSSHUsername
	I0917 02:27:10.112051    5221 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m03/id_rsa Username:docker}
	I0917 02:27:10.148705    5221 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0917 02:27:10.148728    5221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:27:10.148796    5221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 02:27:10.208951    5221 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0917 02:27:10.209004    5221 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0917 02:27:10.209023    5221 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:27:10.209033    5221 start.go:495] detecting cgroup driver to use...
	I0917 02:27:10.209116    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:27:10.224400    5221 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0917 02:27:10.224628    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 02:27:10.233324    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:27:10.242088    5221 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:27:10.242138    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:27:10.250958    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:27:10.259862    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:27:10.268732    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:27:10.277394    5221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:27:10.286530    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:27:10.295429    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:27:10.304249    5221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:27:10.313055    5221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:27:10.321020    5221 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0917 02:27:10.321157    5221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:27:10.329565    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:27:10.437126    5221 ssh_runner.go:195] Run: sudo systemctl restart containerd
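
The run of sed invocations above rewrites /etc/containerd/config.toml in place, for example forcing SystemdCgroup = false so containerd matches the cgroupfs driver minikube selected. One of those edits as a local Go sketch (setCgroupfs is an invented name; the real flow performs the sed over SSH):

    package main

    import (
        "os"
        "regexp"
    )

    // setCgroupfs mirrors `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`,
    // preserving the line's original indentation via the capture group.
    func setCgroupfs(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
            panic(err)
        }
    }
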
	I0917 02:27:10.454772    5221 start.go:495] detecting cgroup driver to use...
	I0917 02:27:10.454854    5221 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:27:10.474710    5221 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0917 02:27:10.475164    5221 command_runner.go:130] > [Unit]
	I0917 02:27:10.475174    5221 command_runner.go:130] > Description=Docker Application Container Engine
	I0917 02:27:10.475179    5221 command_runner.go:130] > Documentation=https://docs.docker.com
	I0917 02:27:10.475198    5221 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0917 02:27:10.475206    5221 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0917 02:27:10.475211    5221 command_runner.go:130] > StartLimitBurst=3
	I0917 02:27:10.475215    5221 command_runner.go:130] > StartLimitIntervalSec=60
	I0917 02:27:10.475218    5221 command_runner.go:130] > [Service]
	I0917 02:27:10.475221    5221 command_runner.go:130] > Type=notify
	I0917 02:27:10.475224    5221 command_runner.go:130] > Restart=on-failure
	I0917 02:27:10.475229    5221 command_runner.go:130] > Environment=NO_PROXY=192.169.0.14
	I0917 02:27:10.475233    5221 command_runner.go:130] > Environment=NO_PROXY=192.169.0.14,192.169.0.15
	I0917 02:27:10.475240    5221 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0917 02:27:10.475250    5221 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0917 02:27:10.475256    5221 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0917 02:27:10.475261    5221 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0917 02:27:10.475267    5221 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0917 02:27:10.475272    5221 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0917 02:27:10.475283    5221 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0917 02:27:10.475289    5221 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0917 02:27:10.475294    5221 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0917 02:27:10.475299    5221 command_runner.go:130] > ExecStart=
	I0917 02:27:10.475312    5221 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0917 02:27:10.475316    5221 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0917 02:27:10.475322    5221 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0917 02:27:10.475331    5221 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0917 02:27:10.475334    5221 command_runner.go:130] > LimitNOFILE=infinity
	I0917 02:27:10.475338    5221 command_runner.go:130] > LimitNPROC=infinity
	I0917 02:27:10.475341    5221 command_runner.go:130] > LimitCORE=infinity
	I0917 02:27:10.475346    5221 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0917 02:27:10.475351    5221 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0917 02:27:10.475354    5221 command_runner.go:130] > TasksMax=infinity
	I0917 02:27:10.475357    5221 command_runner.go:130] > TimeoutStartSec=0
	I0917 02:27:10.475362    5221 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0917 02:27:10.475366    5221 command_runner.go:130] > Delegate=yes
	I0917 02:27:10.475375    5221 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0917 02:27:10.475379    5221 command_runner.go:130] > KillMode=process
	I0917 02:27:10.475382    5221 command_runner.go:130] > [Install]
	I0917 02:27:10.475387    5221 command_runner.go:130] > WantedBy=multi-user.target
	I0917 02:27:10.475467    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:27:10.487090    5221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:27:10.505533    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:27:10.516787    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:27:10.527672    5221 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:27:10.547133    5221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:27:10.557296    5221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:27:10.571773    5221 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0917 02:27:10.572066    5221 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:27:10.574829    5221 command_runner.go:130] > /usr/bin/cri-dockerd
	I0917 02:27:10.575045    5221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:27:10.582206    5221 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 02:27:10.595639    5221 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:27:10.704292    5221 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:27:10.818639    5221 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:27:10.818670    5221 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:27:10.832988    5221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:27:10.931490    5221 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:28:11.810763    5221 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0917 02:28:11.810778    5221 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0917 02:28:11.810852    5221 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.879066143s)
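
This is where the run actually goes wrong: `systemctl restart docker` blocks for the full 1m0.88s and exits non-zero, so the harness immediately dumps the unit's journal (the output that follows). A compact Go sketch of that fail-then-inspect probe, as it would run on the guest:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Restart docker and, on failure, pull the unit journal for diagnosis,
        // mirroring the restart + `journalctl --no-pager -u docker` pair above.
        if out, err := exec.Command("sudo", "systemctl", "restart", "docker").CombinedOutput(); err != nil {
            fmt.Printf("restart failed: %v\n%s\n", err, out)
            logs, _ := exec.Command("sudo", "journalctl", "--no-pager", "-u", "docker").Output()
            fmt.Printf("%s\n", logs)
        }
    }
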
	I0917 02:28:11.810930    5221 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0917 02:28:11.820846    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0917 02:28:11.820860    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.587012293Z" level=info msg="Starting up"
	I0917 02:28:11.820868    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.587727927Z" level=info msg="containerd not running, starting managed containerd"
	I0917 02:28:11.820881    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.588278751Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
	I0917 02:28:11.820889    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.604257552Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I0917 02:28:11.820899    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620120903Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0917 02:28:11.820909    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620146681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0917 02:28:11.820918    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620184469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0917 02:28:11.820927    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620194716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.820937    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620335138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0917 02:28:11.820946    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620374123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.820965    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620521898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0917 02:28:11.820976    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620558023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.820987    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620570804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0917 02:28:11.820996    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620578774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.821007    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620679363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.821016    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620870887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.821030    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622470881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0917 02:28:11.821041    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622510433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0917 02:28:11.821141    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622614354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0917 02:28:11.821157    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622647767Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0917 02:28:11.821168    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622750438Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0917 02:28:11.821176    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622793925Z" level=info msg="metadata content store policy set" policy=shared
	I0917 02:28:11.821184    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624278427Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0917 02:28:11.821194    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624325218Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0917 02:28:11.821202    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624338472Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0917 02:28:11.821211    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624348654Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0917 02:28:11.821219    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624360500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0917 02:28:11.821228    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624450205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0917 02:28:11.821237    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624612298Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0917 02:28:11.821245    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624684799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0917 02:28:11.821254    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624696377Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0917 02:28:11.821263    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624704926Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0917 02:28:11.821273    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624720392Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821284    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624732730Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821294    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624741016Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821302    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624762305Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821311    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624773829Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821320    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624782485Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821471    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624791242Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821487    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624799058Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0917 02:28:11.821509    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624812700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821522    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624821844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821531    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624838386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821540    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624849680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821553    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624860870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821562    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624869678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821571    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624877407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821579    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624885574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821589    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624894140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821597    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624903681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821606    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624911167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821614    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624918808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821627    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624926384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821636    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624935585Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0917 02:28:11.821644    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624951098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821653    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624959500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821662    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624967057Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0917 02:28:11.821671    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624995177Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0917 02:28:11.821683    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625006123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0917 02:28:11.821693    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625013538Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0917 02:28:11.821772    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625021457Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0917 02:28:11.821785    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625027736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0917 02:28:11.821797    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625037164Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0917 02:28:11.821805    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625044080Z" level=info msg="NRI interface is disabled by configuration."
	I0917 02:28:11.821815    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625194820Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0917 02:28:11.821823    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625267645Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0917 02:28:11.821831    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625321861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0917 02:28:11.821840    5221 command_runner.go:130] > Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625334867Z" level=info msg="containerd successfully booted in 0.021716s"
	I0917 02:28:11.821848    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.607440214Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0917 02:28:11.821856    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.629515088Z" level=info msg="Loading containers: start."
	I0917 02:28:11.821875    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.728163971Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0917 02:28:11.821885    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.797005402Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0917 02:28:11.821893    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.846572511Z" level=info msg="Loading containers: done."
	I0917 02:28:11.821903    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.854213853Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	I0917 02:28:11.821911    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.854405276Z" level=info msg="Daemon has completed initialization"
	I0917 02:28:11.821919    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.877998533Z" level=info msg="API listen on /var/run/docker.sock"
	I0917 02:28:11.821927    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.878088127Z" level=info msg="API listen on [::]:2376"
	I0917 02:28:11.821934    5221 command_runner.go:130] > Sep 17 09:27:09 multinode-232000-m03 systemd[1]: Started Docker Application Container Engine.
	I0917 02:28:11.821943    5221 command_runner.go:130] > Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.933377209Z" level=info msg="Processing signal 'terminated'"
	I0917 02:28:11.821954    5221 command_runner.go:130] > Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934105331Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0917 02:28:11.821965    5221 command_runner.go:130] > Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934523529Z" level=info msg="Daemon shutdown complete"
	I0917 02:28:11.821978    5221 command_runner.go:130] > Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934593980Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0917 02:28:11.821989    5221 command_runner.go:130] > Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934602401Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0917 02:28:11.822018    5221 command_runner.go:130] > Sep 17 09:27:10 multinode-232000-m03 systemd[1]: Stopping Docker Application Container Engine...
	I0917 02:28:11.822024    5221 command_runner.go:130] > Sep 17 09:27:11 multinode-232000-m03 systemd[1]: docker.service: Deactivated successfully.
	I0917 02:28:11.822033    5221 command_runner.go:130] > Sep 17 09:27:11 multinode-232000-m03 systemd[1]: Stopped Docker Application Container Engine.
	I0917 02:28:11.822039    5221 command_runner.go:130] > Sep 17 09:27:11 multinode-232000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0917 02:28:11.822045    5221 command_runner.go:130] > Sep 17 09:27:11 multinode-232000-m03 dockerd[873]: time="2024-09-17T09:27:11.969616869Z" level=info msg="Starting up"
	I0917 02:28:11.822054    5221 command_runner.go:130] > Sep 17 09:28:11 multinode-232000-m03 dockerd[873]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0917 02:28:11.822063    5221 command_runner.go:130] > Sep 17 09:28:11 multinode-232000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0917 02:28:11.822070    5221 command_runner.go:130] > Sep 17 09:28:11 multinode-232000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0917 02:28:11.822076    5221 command_runner.go:130] > Sep 17 09:28:11 multinode-232000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	I0917 02:28:11.862553    5221 out.go:201] 
	W0917 02:28:11.899560    5221 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 17 09:27:08 multinode-232000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.587012293Z" level=info msg="Starting up"
	Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.587727927Z" level=info msg="containerd not running, starting managed containerd"
	Sep 17 09:27:08 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:08.588278751Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.604257552Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620120903Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620146681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620184469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620194716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620335138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620374123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620521898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620558023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620570804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620578774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620679363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.620870887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622470881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622510433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622614354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622647767Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622750438Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.622793925Z" level=info msg="metadata content store policy set" policy=shared
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624278427Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624325218Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624338472Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624348654Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624360500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624450205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624612298Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624684799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624696377Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624704926Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624720392Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624732730Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624741016Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624762305Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624773829Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624782485Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624791242Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624799058Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624812700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624821844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624838386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624849680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624860870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624869678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624877407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624885574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624894140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624903681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624911167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624918808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624926384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624935585Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624951098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624959500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624967057Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.624995177Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625006123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625013538Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625021457Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625027736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625037164Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625044080Z" level=info msg="NRI interface is disabled by configuration."
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625194820Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625267645Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625321861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 17 09:27:08 multinode-232000-m03 dockerd[494]: time="2024-09-17T09:27:08.625334867Z" level=info msg="containerd successfully booted in 0.021716s"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.607440214Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.629515088Z" level=info msg="Loading containers: start."
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.728163971Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.797005402Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.846572511Z" level=info msg="Loading containers: done."
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.854213853Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.854405276Z" level=info msg="Daemon has completed initialization"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.877998533Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 17 09:27:09 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:09.878088127Z" level=info msg="API listen on [::]:2376"
	Sep 17 09:27:09 multinode-232000-m03 systemd[1]: Started Docker Application Container Engine.
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.933377209Z" level=info msg="Processing signal 'terminated'"
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934105331Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934523529Z" level=info msg="Daemon shutdown complete"
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934593980Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 17 09:27:10 multinode-232000-m03 dockerd[488]: time="2024-09-17T09:27:10.934602401Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 17 09:27:10 multinode-232000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Sep 17 09:27:11 multinode-232000-m03 systemd[1]: docker.service: Deactivated successfully.
	Sep 17 09:27:11 multinode-232000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Sep 17 09:27:11 multinode-232000-m03 systemd[1]: Starting Docker Application Container Engine...
	Sep 17 09:27:11 multinode-232000-m03 dockerd[873]: time="2024-09-17T09:27:11.969616869Z" level=info msg="Starting up"
	Sep 17 09:28:11 multinode-232000-m03 dockerd[873]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 17 09:28:11 multinode-232000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 09:28:11 multinode-232000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 17 09:28:11 multinode-232000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0917 02:28:11.899670    5221 out.go:270] * 
	W0917 02:28:11.900924    5221 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:28:11.963413    5221 out.go:201] 
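	A hedged reading of the failure above, not captured output: dockerd's restart on multinode-232000-m03 times out dialing /run/containerd/containerd.sock, and the error text itself names the next steps. Assuming the profile and node names from this run, the suggested checks would be run from inside the guest:
	
	    $ minikube ssh -p multinode-232000 -n m03        # node suffix as listed by 'minikube node list' (assumption)
	    $ sudo systemctl status docker.service           # confirms the status=1/FAILURE exit recorded above
	    $ sudo journalctl -xeu docker.service --no-pager # the full unit log the error message recommends
	    $ sudo systemctl status containerd               # is the containerd unit dockerd dials actually up?
	    $ ls -l /run/containerd/containerd.sock         # does the socket exist at the path being dialed?
	
	Whether the socket is missing or containerd is merely slow to come up cannot be determined from this log alone.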
	
	
	==> Docker <==
	Sep 17 09:26:14 multinode-232000 dockerd[848]: time="2024-09-17T09:26:14.878927507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:26:14 multinode-232000 dockerd[848]: time="2024-09-17T09:26:14.879021324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:26:14 multinode-232000 dockerd[848]: time="2024-09-17T09:26:14.879163937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:26:14 multinode-232000 dockerd[848]: time="2024-09-17T09:26:14.882499716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:26:14 multinode-232000 dockerd[848]: time="2024-09-17T09:26:14.882634219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:26:14 multinode-232000 dockerd[848]: time="2024-09-17T09:26:14.882701242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:26:14 multinode-232000 dockerd[848]: time="2024-09-17T09:26:14.882956950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:26:15 multinode-232000 cri-dockerd[1093]: time="2024-09-17T09:26:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/889c45609f834b5276c894b4e452db72b2ccf8c22434c8f58949f4b49335c687/resolv.conf as [nameserver 192.169.0.1]"
	Sep 17 09:26:15 multinode-232000 cri-dockerd[1093]: time="2024-09-17T09:26:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d9c7acd1b8551873488d74f89eb4a8e7439af07d8db3d2a8629dd80cf3130aed/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 09:26:15 multinode-232000 dockerd[848]: time="2024-09-17T09:26:15.147867185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:26:15 multinode-232000 dockerd[848]: time="2024-09-17T09:26:15.148042751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:26:15 multinode-232000 dockerd[848]: time="2024-09-17T09:26:15.149482354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:26:15 multinode-232000 dockerd[848]: time="2024-09-17T09:26:15.151311395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:26:15 multinode-232000 dockerd[848]: time="2024-09-17T09:26:15.165338196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:26:15 multinode-232000 dockerd[848]: time="2024-09-17T09:26:15.165869431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:26:15 multinode-232000 dockerd[848]: time="2024-09-17T09:26:15.165903559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:26:15 multinode-232000 dockerd[848]: time="2024-09-17T09:26:15.166056880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:26:29 multinode-232000 dockerd[841]: time="2024-09-17T09:26:29.690613388Z" level=info msg="ignoring event" container=01991b1846976c59303ab99e6e160edac66cf3d1e203ed487c84a56bf9588948 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 09:26:29 multinode-232000 dockerd[848]: time="2024-09-17T09:26:29.691731247Z" level=info msg="shim disconnected" id=01991b1846976c59303ab99e6e160edac66cf3d1e203ed487c84a56bf9588948 namespace=moby
	Sep 17 09:26:29 multinode-232000 dockerd[848]: time="2024-09-17T09:26:29.691977285Z" level=warning msg="cleaning up after shim disconnected" id=01991b1846976c59303ab99e6e160edac66cf3d1e203ed487c84a56bf9588948 namespace=moby
	Sep 17 09:26:29 multinode-232000 dockerd[848]: time="2024-09-17T09:26:29.691989594Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 09:26:45 multinode-232000 dockerd[848]: time="2024-09-17T09:26:45.070000460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 09:26:45 multinode-232000 dockerd[848]: time="2024-09-17T09:26:45.070064282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 09:26:45 multinode-232000 dockerd[848]: time="2024-09-17T09:26:45.070078002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:26:45 multinode-232000 dockerd[848]: time="2024-09-17T09:26:45.070605071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1b14529377e94       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   71aa6a3a8feb7       storage-provisioner
	7689950d2261a       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   d9c7acd1b8551       busybox-7dff88458-7npgw
	eeb6c506c2a47       c69fa2e9cbf5f                                                                                         About a minute ago   Running             coredns                   1                   889c45609f834       coredns-7c65d6cfc9-hr8rd
	e9d635b43c8e1       12968670680f4                                                                                         2 minutes ago        Running             kindnet-cni               1                   7f797360a5052       kindnet-fgvhm
	70ffa012ff8dc       60c005f310ff3                                                                                         2 minutes ago        Running             kube-proxy                1                   5679b2dc89f6d       kube-proxy-9s8zh
	01991b1846976       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   71aa6a3a8feb7       storage-provisioner
	365bfc87a5e16       2e96e5913fc06                                                                                         2 minutes ago        Running             etcd                      1                   782e50d1be8b0       etcd-multinode-232000
	b1e15049accbf       175ffd71cce3d                                                                                         2 minutes ago        Running             kube-controller-manager   1                   3d00ffa0881b0       kube-controller-manager-multinode-232000
	77a62050273b4       6bab7719df100                                                                                         2 minutes ago        Running             kube-apiserver            1                   07a34f5103ff1       kube-apiserver-multinode-232000
	111a1121421ea       9aa1fad941575                                                                                         2 minutes ago        Running             kube-scheduler            1                   70a141e56a868       kube-scheduler-multinode-232000
	02ad82d87bd03       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   a348ccf368bcb       busybox-7dff88458-7npgw
	8b2f4ea197c51       c69fa2e9cbf5f                                                                                         5 minutes ago        Exited              coredns                   0                   84e22c05755ca       coredns-7c65d6cfc9-hr8rd
	3dc3bd4da839a       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              6 minutes ago        Exited              kindnet-cni               0                   90f44d581694f       kindnet-fgvhm
	96e8ac7b181c4       60c005f310ff3                                                                                         6 minutes ago        Exited              kube-proxy                0                   b6a933d5abb7f       kube-proxy-9s8zh
	ab8e6362f1336       2e96e5913fc06                                                                                         6 minutes ago        Exited              etcd                      0                   f9ddf66585b54       etcd-multinode-232000
	5db9fa24f683f       9aa1fad941575                                                                                         6 minutes ago        Exited              kube-scheduler            0                   8e04470f77bc8       kube-scheduler-multinode-232000
	8e788bff41ec4       6bab7719df100                                                                                         6 minutes ago        Exited              kube-apiserver            0                   8998ef0cd2fb4       kube-apiserver-multinode-232000
	ff3a45c5df2e1       175ffd71cce3d                                                                                         6 minutes ago        Exited              kube-controller-manager   0                   77ac0fcdf71bc       kube-controller-manager-multinode-232000
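	The listing above uses CRI-level columns (ATTEMPT, POD ID); assuming the multinode-232000 profile, a sketch of regenerating the same view from inside the guest:
	
	    $ minikube ssh -p multinode-232000 -- sudo crictl ps -a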
	
	
	==> coredns [8b2f4ea197c5] <==
	[INFO] 10.244.1.2:34867 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000097412s
	[INFO] 10.244.1.2:54362 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097727s
	[INFO] 10.244.1.2:59960 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008534s
	[INFO] 10.244.1.2:43033 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000093939s
	[INFO] 10.244.1.2:50986 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000045381s
	[INFO] 10.244.1.2:43666 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108517s
	[INFO] 10.244.1.2:36813 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083489s
	[INFO] 10.244.0.3:51868 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087101s
	[INFO] 10.244.0.3:56904 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008814s
	[INFO] 10.244.0.3:33196 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009993s
	[INFO] 10.244.0.3:46415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163053s
	[INFO] 10.244.1.2:42183 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013597s
	[INFO] 10.244.1.2:54400 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090858s
	[INFO] 10.244.1.2:34403 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115036s
	[INFO] 10.244.1.2:37668 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092514s
	[INFO] 10.244.0.3:60755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011917s
	[INFO] 10.244.0.3:57106 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00020772s
	[INFO] 10.244.0.3:38771 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096732s
	[INFO] 10.244.0.3:44267 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00007786s
	[INFO] 10.244.1.2:45789 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012747s
	[INFO] 10.244.1.2:44720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000070558s
	[INFO] 10.244.1.2:35545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127584s
	[INFO] 10.244.1.2:46634 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000080838s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
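	The records above resolve the fully qualified cluster names with NOERROR; the NXDOMAIN answers are only for the shorter forms tried along the search path, so in-cluster DNS looks healthy up to the SIGTERM. A sketch of re-running the same lookups from the busybox pod seen in this run (assumes kubectl is pointed at the multinode-232000 context):
	
	    $ kubectl exec busybox-7dff88458-7npgw -- nslookup kubernetes.default.svc.cluster.local
	    $ kubectl exec busybox-7dff88458-7npgw -- nslookup host.minikube.internal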
	
	
	==> coredns [eeb6c506c2a4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53314 - 48100 "HINFO IN 6001783425185009512.402387547379973602. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.018553476s
	
	
	==> describe nodes <==
	Name:               multinode-232000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-232000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=multinode-232000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T02_21_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:21:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-232000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:28:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:26:12 +0000   Tue, 17 Sep 2024 09:21:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:26:12 +0000   Tue, 17 Sep 2024 09:21:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:26:12 +0000   Tue, 17 Sep 2024 09:21:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:26:12 +0000   Tue, 17 Sep 2024 09:26:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.14
	  Hostname:    multinode-232000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 affbae943e4d4b2eafcba43350c9eaf7
	  System UUID:                807442ba-0000-0000-b144-29938f44cef0
	  Boot ID:                    06187b79-79ea-4d00-907a-7d92fba31e02
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7npgw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 coredns-7c65d6cfc9-hr8rd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m12s
	  kube-system                 etcd-multinode-232000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-fgvhm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m12s
	  kube-system                 kube-apiserver-multinode-232000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-multinode-232000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-9s8zh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-scheduler-multinode-232000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m11s                  kube-proxy       
	  Normal  Starting                 2m13s                  kube-proxy       
	  Normal  Starting                 6m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m23s (x8 over 6m23s)  kubelet          Node multinode-232000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m23s (x8 over 6m23s)  kubelet          Node multinode-232000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m23s (x7 over 6m23s)  kubelet          Node multinode-232000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    6m17s                  kubelet          Node multinode-232000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m17s                  kubelet          Node multinode-232000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     6m17s                  kubelet          Node multinode-232000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m13s                  node-controller  Node multinode-232000 event: Registered Node multinode-232000 in Controller
	  Normal  NodeReady                5m53s                  kubelet          Node multinode-232000 status is now: NodeReady
	  Normal  Starting                 2m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m18s (x8 over 2m18s)  kubelet          Node multinode-232000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s (x8 over 2m18s)  kubelet          Node multinode-232000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s (x7 over 2m18s)  kubelet          Node multinode-232000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m12s                  node-controller  Node multinode-232000 event: Registered Node multinode-232000 in Controller
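	This node reports Ready with no taints; multinode-232000-m03, described further below, instead carries node.kubernetes.io/unreachable taints after its kubelet stopped posting status, which lines up with the docker failure on that node. A hedged sketch to surface the contrast via kubectl:
	
	    $ kubectl get nodes -o wide
	    $ kubectl describe node multinode-232000-m03 | grep -A1 Taints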
	
	
	Name:               multinode-232000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-232000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=multinode-232000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_26_40_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:26:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-232000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:28:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:26:54 +0000   Tue, 17 Sep 2024 09:26:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:26:54 +0000   Tue, 17 Sep 2024 09:26:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:26:54 +0000   Tue, 17 Sep 2024 09:26:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:26:54 +0000   Tue, 17 Sep 2024 09:26:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-232000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d1bc9bea1da4dd3bb7db8d244bf96a8
	  System UUID:                b4bb4974-0000-0000-9049-06fa7b3612bb
	  Boot ID:                    09fdfb15-5591-4af5-b471-87baf90f44dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bz9gj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m30s
	  kube-system                 kube-proxy-8fb4t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m22s                  kube-proxy  
	  Normal  Starting                 91s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  5m30s (x2 over 5m30s)  kubelet     Node multinode-232000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x2 over 5m30s)  kubelet     Node multinode-232000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x2 over 5m30s)  kubelet     Node multinode-232000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m30s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m7s                   kubelet     Node multinode-232000-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  94s (x2 over 94s)      kubelet     Node multinode-232000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s (x2 over 94s)      kubelet     Node multinode-232000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x2 over 94s)      kubelet     Node multinode-232000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                79s                    kubelet     Node multinode-232000-m02 status is now: NodeReady
	
	
	Name:               multinode-232000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-232000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=multinode-232000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T02_24_31_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:24:31 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-232000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:24:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 17 Sep 2024 09:24:49 +0000   Tue, 17 Sep 2024 09:26:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 17 Sep 2024 09:24:49 +0000   Tue, 17 Sep 2024 09:26:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 17 Sep 2024 09:24:49 +0000   Tue, 17 Sep 2024 09:26:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 17 Sep 2024 09:24:49 +0000   Tue, 17 Sep 2024 09:26:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.16
	  Hostname:    multinode-232000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 324299ae409844598b456acdc070fb58
	  System UUID:                d1ac4519-0000-0000-b59b-fee993a19e36
	  Boot ID:                    bb219e3a-4ccd-49af-bfcd-6dbee3f8dd31
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-q7wj6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kindnet-7djsb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m36s
	  kube-system                 kube-proxy-xlb2z           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m29s                  kube-proxy       
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    4m36s (x2 over 4m37s)  kubelet          Node multinode-232000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s (x2 over 4m37s)  kubelet          Node multinode-232000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m36s (x2 over 4m37s)  kubelet          Node multinode-232000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                4m13s                  kubelet          Node multinode-232000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m42s (x2 over 3m42s)  kubelet          Node multinode-232000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s (x2 over 3m42s)  kubelet          Node multinode-232000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s (x2 over 3m42s)  kubelet          Node multinode-232000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m24s                  kubelet          Node multinode-232000-m03 status is now: NodeReady
	  Normal  RegisteredNode           2m12s                  node-controller  Node multinode-232000-m03 event: Registered Node multinode-232000-m03 in Controller
	  Normal  NodeNotReady             91s                    node-controller  Node multinode-232000-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.007953] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.689997] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.701080] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.258754] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +24.584053] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +0.105378] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
	[  +1.843604] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.258479] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.099561] systemd-fstab-generator[818]: Ignoring "noauto" option for root device
	[  +0.108304] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +2.436636] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	[  +0.096546] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
	[  +0.106060] systemd-fstab-generator[1070]: Ignoring "noauto" option for root device
	[  +0.050803] kauditd_printk_skb: 239 callbacks suppressed
	[  +0.077700] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +0.405707] systemd-fstab-generator[1214]: Ignoring "noauto" option for root device
	[  +1.815295] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +4.570091] kauditd_printk_skb: 128 callbacks suppressed
	[Sep17 09:26] systemd-fstab-generator[2198]: Ignoring "noauto" option for root device
	[ +12.159467] kauditd_printk_skb: 72 callbacks suppressed
	[ +14.749844] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [365bfc87a5e1] <==
	{"level":"info","ts":"2024-09-17T09:25:55.963373Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.169.0.14:2380"}
	{"level":"info","ts":"2024-09-17T09:25:55.963512Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.169.0.14:2380"}
	{"level":"info","ts":"2024-09-17T09:25:55.963943Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-17T09:25:55.964825Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-17T09:25:55.965050Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-17T09:25:55.965437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 switched to configuration voters=(3612125861281190545)"}
	{"level":"info","ts":"2024-09-17T09:25:55.965708Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9b2185e42760b005","local-member-id":"3220d9553daad291","added-peer-id":"3220d9553daad291","added-peer-peer-urls":["https://192.169.0.14:2380"]}
	{"level":"info","ts":"2024-09-17T09:25:55.966030Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9b2185e42760b005","local-member-id":"3220d9553daad291","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:25:55.966132Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:25:57.545929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T09:25:57.545972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:25:57.545989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 received MsgPreVoteResp from 3220d9553daad291 at term 2"}
	{"level":"info","ts":"2024-09-17T09:25:57.545996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became candidate at term 3"}
	{"level":"info","ts":"2024-09-17T09:25:57.546001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 received MsgVoteResp from 3220d9553daad291 at term 3"}
	{"level":"info","ts":"2024-09-17T09:25:57.546035Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became leader at term 3"}
	{"level":"info","ts":"2024-09-17T09:25:57.546043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3220d9553daad291 elected leader 3220d9553daad291 at term 3"}
	{"level":"info","ts":"2024-09-17T09:25:57.546776Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3220d9553daad291","local-member-attributes":"{Name:multinode-232000 ClientURLs:[https://192.169.0.14:2379]}","request-path":"/0/members/3220d9553daad291/attributes","cluster-id":"9b2185e42760b005","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T09:25:57.546842Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:25:57.547441Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T09:25:57.547747Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T09:25:57.548255Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T09:25:57.546955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:25:57.549472Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T09:25:57.550078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T09:25:57.551190Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.14:2379"}
	
	
	==> etcd [ab8e6362f133] <==
	{"level":"info","ts":"2024-09-17T09:21:52.393609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became leader at term 2"}
	{"level":"info","ts":"2024-09-17T09:21:52.393615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3220d9553daad291 elected leader 3220d9553daad291 at term 2"}
	{"level":"info","ts":"2024-09-17T09:21:52.399766Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:21:52.401561Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9b2185e42760b005","local-member-id":"3220d9553daad291","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:21:52.401633Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:21:52.401861Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:21:52.401818Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3220d9553daad291","local-member-attributes":"{Name:multinode-232000 ClientURLs:[https://192.169.0.14:2379]}","request-path":"/0/members/3220d9553daad291/attributes","cluster-id":"9b2185e42760b005","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T09:21:52.402025Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:21:52.402174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:21:52.402492Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T09:21:52.402569Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T09:21:52.403043Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T09:21:52.404876Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T09:21:52.405040Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T09:21:52.408278Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.14:2379"}
	{"level":"info","ts":"2024-09-17T09:25:03.162998Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-17T09:25:03.163040Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-232000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.14:2380"],"advertise-client-urls":["https://192.169.0.14:2379"]}
	{"level":"warn","ts":"2024-09-17T09:25:03.163088Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T09:25:03.163144Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T09:25:03.173788Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.14:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T09:25:03.173813Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.14:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T09:25:03.176302Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3220d9553daad291","current-leader-member-id":"3220d9553daad291"}
	{"level":"info","ts":"2024-09-17T09:25:03.180596Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.14:2380"}
	{"level":"info","ts":"2024-09-17T09:25:03.180685Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.14:2380"}
	{"level":"info","ts":"2024-09-17T09:25:03.180694Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-232000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.14:2380"],"advertise-client-urls":["https://192.169.0.14:2379"]}
	
	
	==> kernel <==
	 09:28:14 up 3 min,  0 users,  load average: 0.05, 0.08, 0.03
	Linux multinode-232000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3dc3bd4da839] <==
	I0917 09:24:26.047789       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0917 09:24:26.047965       1 main.go:299] handling current node
	I0917 09:24:26.048103       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0917 09:24:26.048184       1 main.go:322] Node multinode-232000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:24:26.048443       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0917 09:24:26.048593       1 main.go:322] Node multinode-232000-m03 has CIDR [10.244.3.0/24] 
	I0917 09:24:36.044229       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0917 09:24:36.044264       1 main.go:299] handling current node
	I0917 09:24:36.044373       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0917 09:24:36.044383       1 main.go:322] Node multinode-232000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:24:36.044568       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0917 09:24:36.044622       1 main.go:322] Node multinode-232000-m03 has CIDR [10.244.4.0/24] 
	I0917 09:24:36.044693       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 192.169.0.16 Flags: [] Table: 0} 
	I0917 09:24:46.043573       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0917 09:24:46.043749       1 main.go:322] Node multinode-232000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:24:46.043998       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0917 09:24:46.044125       1 main.go:322] Node multinode-232000-m03 has CIDR [10.244.4.0/24] 
	I0917 09:24:46.044552       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0917 09:24:46.044668       1 main.go:299] handling current node
	I0917 09:24:56.048936       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0917 09:24:56.048957       1 main.go:322] Node multinode-232000-m03 has CIDR [10.244.4.0/24] 
	I0917 09:24:56.049016       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0917 09:24:56.049021       1 main.go:299] handling current node
	I0917 09:24:56.049030       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0917 09:24:56.049033       1 main.go:322] Node multinode-232000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e9d635b43c8e] <==
	I0917 09:27:30.558276       1 main.go:322] Node multinode-232000-m03 has CIDR [10.244.4.0/24] 
	I0917 09:27:40.560806       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0917 09:27:40.561086       1 main.go:299] handling current node
	I0917 09:27:40.561154       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0917 09:27:40.561178       1 main.go:322] Node multinode-232000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:27:40.561311       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0917 09:27:40.561526       1 main.go:322] Node multinode-232000-m03 has CIDR [10.244.4.0/24] 
	I0917 09:27:50.561873       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0917 09:27:50.562105       1 main.go:299] handling current node
	I0917 09:27:50.562197       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0917 09:27:50.562305       1 main.go:322] Node multinode-232000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:27:50.562535       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0917 09:27:50.562653       1 main.go:322] Node multinode-232000-m03 has CIDR [10.244.4.0/24] 
	I0917 09:28:00.556577       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0917 09:28:00.556711       1 main.go:299] handling current node
	I0917 09:28:00.556749       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0917 09:28:00.556776       1 main.go:322] Node multinode-232000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:28:00.557002       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0917 09:28:00.557107       1 main.go:322] Node multinode-232000-m03 has CIDR [10.244.4.0/24] 
	I0917 09:28:10.563095       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0917 09:28:10.563329       1 main.go:299] handling current node
	I0917 09:28:10.563394       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0917 09:28:10.563474       1 main.go:322] Node multinode-232000-m02 has CIDR [10.244.1.0/24] 
	I0917 09:28:10.563663       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0917 09:28:10.563749       1 main.go:322] Node multinode-232000-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [77a62050273b] <==
	I0917 09:25:58.486656       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 09:25:58.486779       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0917 09:25:58.488966       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 09:25:58.492094       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 09:25:58.492121       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 09:25:58.494149       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 09:25:58.494314       1 aggregator.go:171] initial CRD sync complete...
	I0917 09:25:58.494359       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 09:25:58.494364       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 09:25:58.494368       1 cache.go:39] Caches are synced for autoregister controller
	I0917 09:25:58.497641       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 09:25:58.507272       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0917 09:25:58.514159       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 09:25:58.531139       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 09:25:58.533500       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 09:25:58.533557       1 policy_source.go:224] refreshing policies
	I0917 09:25:58.571193       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 09:25:59.392341       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 09:26:00.600824       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 09:26:00.692067       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 09:26:00.701719       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 09:26:00.740475       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 09:26:00.745276       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 09:26:02.049463       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 09:26:02.299919       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [8e788bff41ec] <==
	W0917 09:25:03.175505       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175532       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175567       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175593       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175619       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175645       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175672       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175698       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175724       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175774       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175798       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175819       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175839       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175858       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175876       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175898       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175917       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175938       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175945       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.175985       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.176058       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.176091       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.176112       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 09:25:03.176116       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0917 09:25:03.194545       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [b1e15049accb] <==
	I0917 09:26:35.385533       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.331531ms"
	I0917 09:26:35.385642       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="67.493µs"
	I0917 09:26:38.511412       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-232000-m03"
	I0917 09:26:38.512229       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m02"
	I0917 09:26:39.509997       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-232000-m02\" does not exist"
	I0917 09:26:39.510335       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-232000-m03"
	I0917 09:26:39.519627       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-232000-m02" podCIDRs=["10.244.1.0/24"]
	I0917 09:26:39.519676       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m02"
	I0917 09:26:39.519692       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m02"
	I0917 09:26:39.520784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m02"
	I0917 09:26:40.396127       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m02"
	I0917 09:26:40.700672       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m02"
	I0917 09:26:41.367161       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="31.06µs"
	I0917 09:26:42.083500       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:26:42.091107       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:26:42.162847       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m02"
	I0917 09:26:49.704255       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m02"
	I0917 09:26:52.241532       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:26:54.575317       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m02"
	I0917 09:26:54.575451       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-232000-m02"
	I0917 09:26:54.581139       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m02"
	I0917 09:26:57.108888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m02"
	I0917 09:27:07.397656       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.037µs"
	I0917 09:27:07.584945       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.105µs"
	I0917 09:27:07.586200       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.388µs"
	
	
	==> kube-controller-manager [ff3a45c5df2e] <==
	I0917 09:23:40.484144       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:23:47.384512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:00.119412       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:00.120017       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-232000-m02"
	I0917 09:24:00.126078       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:00.394549       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:07.557534       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:30.460160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:30.474441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:30.629928       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:30.630049       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-232000-m02"
	I0917 09:24:31.551221       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-232000-m02"
	I0917 09:24:31.551402       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-232000-m03\" does not exist"
	I0917 09:24:31.555743       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-232000-m03" podCIDRs=["10.244.4.0/24"]
	I0917 09:24:31.556212       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:31.556400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:31.562010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:31.993782       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:32.282056       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:35.469790       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:41.838627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:49.879983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:49.880313       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-232000-m02"
	I0917 09:24:49.886867       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	I0917 09:24:50.411955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-232000-m03"
	
	
	==> kube-proxy [70ffa012ff8d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:25:59.807500       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:25:59.819328       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.14"]
	E0917 09:25:59.819389       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:25:59.846779       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:25:59.846843       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:25:59.846859       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:25:59.848787       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:25:59.849242       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:25:59.849315       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:25:59.850535       1 config.go:199] "Starting service config controller"
	I0917 09:25:59.850891       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:25:59.851192       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:25:59.851289       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:25:59.853284       1 config.go:328] "Starting node config controller"
	I0917 09:25:59.853499       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:25:59.951420       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:25:59.951979       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:25:59.953760       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [96e8ac7b181c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 09:22:01.945384       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 09:22:01.952979       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.14"]
	E0917 09:22:01.953033       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:22:02.040534       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 09:22:02.041036       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 09:22:02.041230       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:22:02.047843       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:22:02.048231       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:22:02.048344       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:22:02.049708       1 config.go:199] "Starting service config controller"
	I0917 09:22:02.049838       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:22:02.049964       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:22:02.050076       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:22:02.050484       1 config.go:328] "Starting node config controller"
	I0917 09:22:02.050580       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:22:02.150018       1 shared_informer.go:320] Caches are synced for service config
	I0917 09:22:02.151344       1 shared_informer.go:320] Caches are synced for node config
	I0917 09:22:02.151380       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [111a1121421e] <==
	I0917 09:25:57.262425       1 serving.go:386] Generated self-signed cert in-memory
	W0917 09:25:58.436121       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 09:25:58.436318       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 09:25:58.436460       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 09:25:58.436507       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 09:25:58.472592       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 09:25:58.474202       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:25:58.477318       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 09:25:58.477474       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 09:25:58.478524       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 09:25:58.477487       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 09:25:58.579098       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [5db9fa24f683] <==
	W0917 09:21:53.516620       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 09:21:53.518380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:21:53.516645       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 09:21:53.518477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:21:53.516676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 09:21:53.518604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:21:53.516702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 09:21:53.518701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:21:53.516716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 09:21:53.518853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 09:21:53.516726       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W0917 09:21:53.517421       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 09:21:53.519039       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 09:21:54.322831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 09:21:54.323062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:21:54.331523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 09:21:54.331552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:21:54.340213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 09:21:54.340326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 09:21:54.403822       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 09:21:54.403879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 09:21:54.639023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 09:21:54.639067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 09:21:55.019303       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0917 09:25:03.101941       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 09:26:06 multinode-232000 kubelet[1354]: E0917 09:26:06.544940    1354 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c990c87f-921e-45ba-845b-499147aaa1f9-config-volume podName:c990c87f-921e-45ba-845b-499147aaa1f9 nodeName:}" failed. No retries permitted until 2024-09-17 09:26:14.544922313 +0000 UTC m=+19.673765364 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c990c87f-921e-45ba-845b-499147aaa1f9-config-volume") pod "coredns-7c65d6cfc9-hr8rd" (UID: "c990c87f-921e-45ba-845b-499147aaa1f9") : object "kube-system"/"coredns" not registered
	Sep 17 09:26:06 multinode-232000 kubelet[1354]: E0917 09:26:06.646296    1354 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 17 09:26:06 multinode-232000 kubelet[1354]: E0917 09:26:06.646453    1354 projected.go:194] Error preparing data for projected volume kube-api-access-w2h5l for pod default/busybox-7dff88458-7npgw: object "default"/"kube-root-ca.crt" not registered
	Sep 17 09:26:06 multinode-232000 kubelet[1354]: E0917 09:26:06.646882    1354 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0bea4e6-f039-4023-9b32-1d309b2afbcd-kube-api-access-w2h5l podName:e0bea4e6-f039-4023-9b32-1d309b2afbcd nodeName:}" failed. No retries permitted until 2024-09-17 09:26:14.646589667 +0000 UTC m=+19.775432728 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-w2h5l" (UniqueName: "kubernetes.io/projected/e0bea4e6-f039-4023-9b32-1d309b2afbcd-kube-api-access-w2h5l") pod "busybox-7dff88458-7npgw" (UID: "e0bea4e6-f039-4023-9b32-1d309b2afbcd") : object "default"/"kube-root-ca.crt" not registered
	Sep 17 09:26:07 multinode-232000 kubelet[1354]: E0917 09:26:07.006695    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-hr8rd" podUID="c990c87f-921e-45ba-845b-499147aaa1f9"
	Sep 17 09:26:07 multinode-232000 kubelet[1354]: E0917 09:26:07.007039    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-7npgw" podUID="e0bea4e6-f039-4023-9b32-1d309b2afbcd"
	Sep 17 09:26:09 multinode-232000 kubelet[1354]: E0917 09:26:09.007159    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-hr8rd" podUID="c990c87f-921e-45ba-845b-499147aaa1f9"
	Sep 17 09:26:09 multinode-232000 kubelet[1354]: E0917 09:26:09.007491    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-7npgw" podUID="e0bea4e6-f039-4023-9b32-1d309b2afbcd"
	Sep 17 09:26:11 multinode-232000 kubelet[1354]: E0917 09:26:11.006673    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-hr8rd" podUID="c990c87f-921e-45ba-845b-499147aaa1f9"
	Sep 17 09:26:11 multinode-232000 kubelet[1354]: E0917 09:26:11.007586    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-7npgw" podUID="e0bea4e6-f039-4023-9b32-1d309b2afbcd"
	Sep 17 09:26:12 multinode-232000 kubelet[1354]: I0917 09:26:12.764948    1354 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 17 09:26:30 multinode-232000 kubelet[1354]: I0917 09:26:30.459247    1354 scope.go:117] "RemoveContainer" containerID="f7ccad53a25746dfd98235cad31d332ce7c66650aabf4dc64e5bd22e676461e5"
	Sep 17 09:26:30 multinode-232000 kubelet[1354]: I0917 09:26:30.459430    1354 scope.go:117] "RemoveContainer" containerID="01991b1846976c59303ab99e6e160edac66cf3d1e203ed487c84a56bf9588948"
	Sep 17 09:26:30 multinode-232000 kubelet[1354]: E0917 09:26:30.459512    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(878f83a8-de4f-48b8-98ac-2d34171091ae)\"" pod="kube-system/storage-provisioner" podUID="878f83a8-de4f-48b8-98ac-2d34171091ae"
	Sep 17 09:26:45 multinode-232000 kubelet[1354]: I0917 09:26:45.007842    1354 scope.go:117] "RemoveContainer" containerID="01991b1846976c59303ab99e6e160edac66cf3d1e203ed487c84a56bf9588948"
	Sep 17 09:26:55 multinode-232000 kubelet[1354]: E0917 09:26:55.042083    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 09:26:55 multinode-232000 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 09:26:55 multinode-232000 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 09:26:55 multinode-232000 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 09:26:55 multinode-232000 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 09:27:55 multinode-232000 kubelet[1354]: E0917 09:27:55.038035    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 09:27:55 multinode-232000 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 09:27:55 multinode-232000 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 09:27:55 multinode-232000 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 09:27:55 multinode-232000 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
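Note on the kubelet excerpt above: the "Could not set up iptables canary" entries are kubelet's periodic probe of iptables state, and exit status 3 with "Table does not exist (do you need to insmod?)" means the guest kernel exposes no ip6tables nat table, so the KUBE-KUBELET-CANARY chain can never be created there. The one-minute spacing between the two error blocks matches that periodic probe. A minimal sketch of the same capability check, assuming only the standard library and an ip6tables binary on the guest (this helper is hypothetical, not part of the minikube test suite):

// canarycheck.go - probe whether the ip6tables "nat" table is usable,
// which is the capability the kubelet canary above fails on.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Listing the nat table exercises the same kernel support that
	// creating the KUBE-KUBELET-CANARY chain needs.
	out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
	if err != nil {
		// On this guest image this path is taken: the ip6table_nat
		// module is absent, matching the log lines above.
		fmt.Printf("ip6tables nat table unavailable: %v\n%s", err, out)
		return
	}
	fmt.Println("ip6tables nat table present")
}

These entries appear to be noise here rather than the failure cause; the test's actual failure is the Pending busybox pod shown in the post-mortem below.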
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-232000 -n multinode-232000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-232000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-q7wj6
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-232000 describe pod busybox-7dff88458-q7wj6
helpers_test.go:282: (dbg) kubectl --context multinode-232000 describe pod busybox-7dff88458-q7wj6:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-q7wj6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             multinode-232000-m03/
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cx86v (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-cx86v:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  100s  default-scheduler  Successfully assigned default/busybox-7dff88458-q7wj6 to multinode-232000-m03

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (203.90s)
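The describe output above shows why the post-mortem flags busybox-7dff88458-q7wj6: it was scheduled to multinode-232000-m03 (PodScheduled is True), but the node never came back after the restart, so the pod has no IP and is still Pending when the harness queries pods with --field-selector=status.phase!=Running. A sketch of that same query via client-go, assuming a standard kubeconfig at the default location (the program below is illustrative, not part of helpers_test.go):

// pendingpods.go - list pods whose phase is not Running, across all
// namespaces, mirroring the post-mortem's kubectl field selector.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		// Same selector the harness passes to kubectl above.
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}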

                                                
                                    
TestScheduledStopUnix (141.92s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-362000 --memory=2048 --driver=hyperkit 
E0917 02:33:59.142946    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-362000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.590065544s)

                                                
                                                
-- stdout --
	* [scheduled-stop-362000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-362000" primary control-plane node in "scheduled-stop-362000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-362000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ee:6d:36:c:b3:4e
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-362000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:8f:cc:a2:2:8e
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:8f:cc:a2:2:8e
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-362000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-362000" primary control-plane node in "scheduled-stop-362000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-362000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ee:6d:36:c:b3:4e
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-362000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:8f:cc:a2:2:8e
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:8f:cc:a2:2:8e
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-17 02:36:14.518491 -0700 PDT m=+3520.640765428
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-362000 -n scheduled-stop-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-362000 -n scheduled-stop-362000: exit status 7 (79.392831ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 02:36:14.596069    6219 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 02:36:14.596090    6219 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-362000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-362000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-362000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-362000: (5.246911019s)
--- FAIL: TestScheduledStopUnix (141.92s)
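Both creation attempts above die the same way: the hyperkit driver polls macOS's DHCP leases for the VM's MAC address (note the unpadded octets, e.g. ea:8f:cc:a2:2:8e) and gives up when no lease ever appears, which minikube surfaces as GUEST_PROVISION. A minimal sketch of that lookup, assuming leases are recorded in /var/db/dhcpd_leases as ip_address=/hw_address= pairs (the parser below is illustrative, not the driver's actual code):

// leasescan.go - scan the macOS vmnet DHCP leases file for a MAC address,
// approximating the check that fails with "IP address never found in
// dhcp leases file" above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIP returns the ip_address recorded immediately before a matching
// hw_address entry, assuming the pair ordering used by macOS dhcpd_leases.
func findIP(leases *os.File, mac string) (string, bool) {
	var ip string
	sc := bufio.NewScanner(leases)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		// hw_address lines look like "hw_address=1,ee:6d:36:c:b3:4e";
		// the octets are not zero-padded, matching the log output.
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac) {
			return ip, true
		}
	}
	return "", false
}

func main() {
	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		fmt.Println("no leases file:", err)
		return
	}
	defer f.Close()
	if ip, ok := findIP(f, "ea:8f:cc:a2:2:8e"); ok {
		fmt.Println("lease found:", ip)
	} else {
		fmt.Println("IP address never found in dhcp leases file")
	}
}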

                                                
                                    
TestPause/serial/Start (139.06s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-563000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-563000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m18.980308315s)

                                                
                                                
-- stdout --
	* [pause-563000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-563000" primary control-plane node in "pause-563000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-563000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:a2:c7:6f:7b:50
	* Failed to start hyperkit VM. Running "minikube delete -p pause-563000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c6:79:ed:89:11:37
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c6:79:ed:89:11:37
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-563000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-563000 -n pause-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-563000 -n pause-563000: exit status 7 (80.557942ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 03:17:07.149858    8536 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0917 03:17:07.149878    8536 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-563000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (139.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7201.809s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-258000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.1
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (59m19s)
		TestNetworkPlugins/group (12m7s)
		TestStartStop (21m0s)
		TestStartStop/group/default-k8s-diff-port (2m4s)
		TestStartStop/group/default-k8s-diff-port/serial (2m4s)
		TestStartStop/group/default-k8s-diff-port/serial/SecondStart (24s)
		TestStartStop/group/embed-certs (5m23s)
		TestStartStop/group/embed-certs/serial (5m23s)
		TestStartStop/group/embed-certs/serial/SecondStart (3m47s)

                                                
                                                
goroutine 4694 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d
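The goroutine above is the alarm that fired: testing.(*M).startAlarm arms a single timer for the whole test binary (the "created by time.goFunc" frame shows it is a time-package timer), and when the 2h0m0s budget expires it panics and dumps every goroutine. That is also why SecondStart is reported at 7201.809s even though the running-tests list shows it had been active for only 24s: the figure is the binary-wide timeout, not the subtest's own runtime. An illustrative sketch of the mechanism, using time.AfterFunc in place of the testing package's internal timer:

// timeoutalarm.go - one timer covers the entire binary, mirroring how
// `go test -timeout 2h` produced the panic above.
package main

import (
	"fmt"
	"time"
)

func main() {
	d := 2 * time.Hour
	// Equivalent in spirit to testing.(*M).startAlarm: when the budget
	// expires, panic so the runtime prints all goroutine stacks.
	timer := time.AfterFunc(d, func() {
		panic(fmt.Sprintf("test timed out after %v", d))
	})
	defer timer.Stop()

	runAllTests() // placeholder for the suite's work
}

func runAllTests() { time.Sleep(10 * time.Millisecond) }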

                                                
                                                
goroutine 1 [chan receive, 24 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00092cd00, 0xc00006fbc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc000a00300, {0x686bf80, 0x2a, 0x2a}, {0x1f544d6?, 0xffffffffffffffff?, 0x688fe40?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc000a78aa0)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000a78aa0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:129 +0xa8

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0005f5600)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 1875 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc0014c0d80, 0xc0018c2070)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1341
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 4144 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4143
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 169 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008bb9c0, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 167
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 168 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 167
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 15 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0xff
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 14
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x167

                                                
                                                
goroutine 1808 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015ecf00, 0xc000b54f50)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1807
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 3562 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3561
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3792 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc0018d3f50, 0xc0018d3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0xb0?, 0xc0018d3f50, 0xc0018d3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0x6d62696c205d3134?, 0x203a656e69686361?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x20de705?, 0xc0001fe180?, 0xc000b205b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3801
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3692 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007bcc80, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3674
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2761 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc0014def50, 0xc0014def98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0xc0?, 0xc0014def50, 0xc0014def98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0xc001a981a0?, 0x2093d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000097fd0?, 0x20de764?, 0xc000b216c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2788
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4142 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0017f29d0, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014dad80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0017f2a00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001d52b10, {0x5292e60, 0xc0020dfda0}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001d52b10, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3260 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000a0c8c0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001a669c0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001a669c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001a669c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001a669c0, 0xc000a8a8c0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3258
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4400 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001cdcd80, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4395
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3913 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001b3ae10, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0016a1d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b3ae40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000502680, {0x5292e60, 0xc0015f3e90}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000502680, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3926
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3678 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0007bcbd0, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0013d6d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007bcc80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00190c910, {0x5292e60, 0xc001dedaa0}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00190c910, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3692
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2808 [chan receive, 22 minutes]:
testing.(*T).Run(0xc000629040, {0x3d27f94?, 0x20933f3?}, 0x5283df8)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc000629040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000629040, 0x5283c80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 178 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0008bb910, 0x2d)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0013d5d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008bb9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00029a010, {0x5292e60, 0xc000b60030}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00029a010, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 169
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 179 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc000a6ff50, 0xc000a6ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0x65?, 0xc000a6ff50, 0xc000a6ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0x2020202020202020?, 0x202020202020207c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x632d2d207c202020?, 0x72656e6961746e6f?, 0x656d69746e75722d?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 169
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 180 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 179
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3809 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3792
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3321 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3310
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 1224 [IO wait, 101 minutes]:
internal/poll.runtime_pollWait(0x4dde27d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00090f700?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00090f700)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc00090f700)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc001aea140)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc001aea140)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc001a7e4b0, {0x52aca80, 0xc001aea140})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc001a7e4b0)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc001a66340?, 0xc001a66680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1221
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 3258 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc001a664e0, 0x5283df8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2808
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1478 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1370
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3264 [chan receive, 6 minutes]:
testing.(*T).Run(0xc001a67040, {0x3d29607?, 0x0?}, 0xc001c97a00)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001a67040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001a67040, 0xc000a8aa00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3258
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1769 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc001d4b680, 0xc000b20150)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1768
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 3680 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3679
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3561 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc0018d4f50, 0xc0014acf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0x6f?, 0xc0018d4f50, 0xc0018d4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0x3a35383a66343a35?, 0x44492065663a3535?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x490a7d6637373539?, 0x3a33302037313930?, 0x35322e30303a3931?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3546
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4418 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc0013d2f50, 0xc0013d2f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0x0?, 0xc0013d2f50, 0xc0013d2f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0x2548cb6?, 0xc001805500?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001edcfd0?, 0x20de764?, 0xc001804600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4400
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3332 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3331
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2760 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc000a8a850, 0x1e)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000a6ed80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a8a880)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001eca000, {0x5292e60, 0xc00136a060}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001eca000, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2788
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4143 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc0018d5f50, 0xc0018d5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0x0?, 0xc0018d5f50, 0xc0018d5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0x2548cb6?, 0xc00174f680?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0018d5fd0?, 0x20de764?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3331 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc00132a750, 0xc000b4bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0xa0?, 0xc00132a750, 0xc00132a798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0xc001a98b60?, 0x2093d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00132a7d0?, 0x20de764?, 0xc001ada9a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3322
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3424 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0007bce90, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014d2d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007bcfc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00029a120, {0x5292e60, 0xc0014bc060}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00029a120, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3440
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 1479 [chan receive, 99 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000677dc0, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1370
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4573 [syscall, 4 minutes]:
syscall.syscall6(0x4e409c68?, 0x90?, 0xc0016a3bf8?, 0x73055b8?, 0x90?, 0x1000001f59fc5?, 0x19?)
	/usr/local/go/src/runtime/sys_darwin.go:60 +0x78
syscall.wait4(0xc0016a3bb8?, 0x1f55ac5?, 0x90?, 0x51ebec0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xc0004f0700?, 0xc0016a3bec, 0xc001509590?, 0xc001aaf810?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).pidWait(0xc001f019c0)
	/usr/local/go/src/os/exec_unix.go:70 +0x86
os.(*Process).wait(0x1fa01b9?)
	/usr/local/go/src/os/exec_unix.go:30 +0x1b
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001b63200)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001b63200)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc001a67520, 0xc001b63200)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x52b9e70, 0xc0004c4fc0}, 0xc001a67520, {0xc001c3df08, 0x12}, {0x2d3165a0013ebf58?, 0xc0013ebf60?}, {0x20933f3?, 0x1ff222f?}, {0xc0019c1700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001a67520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001a67520, 0xc000b0e300)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4535
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2715 [chan receive, 60 minutes]:
testing.(*T).Run(0xc000628820, {0x3d27f94?, 0x3539a5b4f9a?}, 0xc001cb84c8)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000628820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc000628820, 0x5283c38)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3691 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3674
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 1464 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000677cd0, 0x28)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00145bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000677dc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00190a170, {0x5292e60, 0xc000b61350}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00190a170, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1479
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3914 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc001edef50, 0xc001edef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0xe0?, 0xc001edef50, 0xc001edef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0xc001afe9c0?, 0x2093d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001edefd0?, 0x20de764?, 0xc000b21ce0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3926
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3442 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3441
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4654 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4653
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2762 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2761
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3330 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001cdc790, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000a6bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001cdc7c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a61440, {0x5292e60, 0xc0014b6ff0}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000a61440, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3322
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3791 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000a8add0, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014cdd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a8ae00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001aae400, {0x5292e60, 0xc000b001e0}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001aae400, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3801
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4376 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4372
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3261 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001a66b60, {0x3d29607?, 0x0?}, 0xc001d3cd00)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001a66b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001a66b60, 0xc000a8a900)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3258
	/usr/local/go/src/testing/testing.go:1743 +0x390
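
Goroutine 3261 shows the normal shape of a parent test inside testing.(*T).Run: for sequential subtests the parent parks on a channel receive until the subtest finishes, so a long-running subtest makes its whole ancestor chain report "chan receive". A minimal reproduction (hypothetical file, not part of the minikube suite):

// sketch_test.go (hypothetical)
package sketch_test

import "testing"

func TestRunBlocksParent(t *testing.T) {
	// While the subtest body runs, the goroutine executing
	// TestRunBlocksParent sits in a channel receive inside t.Run,
	// matching the state of goroutine 3261 above.
	t.Run("group", func(t *testing.T) {
		t.Log("subtest body")
	})
}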

                                                
                                                
goroutine 1914 [select, 97 minutes]:
net/http.(*persistConn).writeLoop(0xc0013fa7e0)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 1898
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

                                                
                                                
goroutine 3441 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc000507f50, 0xc000507f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0x8?, 0xc000507f50, 0xc000507f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0xc001a669c0?, 0x2093d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00029a030?, 0xc001a00000?, 0xc000507fa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3440
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a
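
Goroutine 3441 is the poller half of the same cert-rotation machinery: it re-checks a condition on an interval and parks in select between checks. A stdlib-only stand-in for that loop (pollImmediateUntil here is illustrative, not the apimachinery function itself):

package main

import (
	"context"
	"fmt"
	"time"
)

func pollImmediateUntil(ctx context.Context, interval time.Duration, cond func() bool) error {
	if cond() { // "immediate": check once before the first tick
		return nil
	}
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select { // the goroutine parks here, reported as "[select, N minutes]"
		case <-ctx.Done():
			return ctx.Err()
		case <-t.C:
			if cond() {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	start := time.Now()
	err := pollImmediateUntil(ctx, 100*time.Millisecond, func() bool {
		return time.Since(start) > 300*time.Millisecond
	})
	fmt.Println(err) // <nil>: the condition succeeded before the timeout
}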

                                                
                                                
goroutine 1466 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1465
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3926 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b3ae40, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3921
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2787 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2749
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2788 [chan receive, 60 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a8a880, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2749
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 1605 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a19200, 0xc001976fc0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1604
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 3940 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3939
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2799 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc000a0c8c0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1666 +0x5e5
testing.tRunner(0xc001a98000, 0xc001cb84c8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2715
	/usr/local/go/src/testing/testing.go:1743 +0x390
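
A goroutine parked in testing.(*testContext).waitParallel is waiting for a free slot under the parallel-test limit (the -test.parallel flag, GOMAXPROCS by default). This happens both when a test calls t.Parallel() and, as the tRunner.func1 frame here suggests, when a sequential parent re-acquires its slot after releasing it to its parallel subtests. A sketch of the simpler trigger (hypothetical file):

// sketch_test.go (hypothetical)
package sketch_test

import "testing"

func TestQueuedBehindParallelLimit(t *testing.T) {
	// t.Parallel() pauses this test until the runner has a free
	// parallel slot; while paused, the goroutine reports
	// waitParallel, as goroutine 2799 does above.
	t.Parallel()
}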

                                                
                                                
goroutine 4377 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001cdd0c0, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4372
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 1465 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc001329750, 0xc00145af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0xb0?, 0xc001329750, 0xc001329798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0xc00152e1a0?, 0x2093d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013297d0?, 0x20de764?, 0xc0014d6cb0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1479
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3679 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc00132d750, 0xc000b4ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0x70?, 0xc00132d750, 0xc00132d798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0x2548cb6?, 0xc001660c00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00132d7d0?, 0x20de764?, 0xc001c2c070?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3692
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3546 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001aeab40, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3544
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4154 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0017f2a00, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4149
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 1913 [select, 97 minutes]:
net/http.(*persistConn).readLoop(0xc0013fa7e0)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 1898
	/usr/local/go/src/net/http/transport.go:1874 +0x154f
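
Goroutines 1913 and 1914 are the paired read/write loops that net/http's Transport starts in dialConn for every pooled connection; they live for as long as the connection does, so a 97-minute age is consistent with a long-idle (or leaked) connection rather than a stuck request. The usual hygiene that lets the Transport retire a connection, as a sketch:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("https://example.com/")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Drain and close the body so the Transport can reuse or close the
	// underlying connection; an unread body keeps the connection (and
	// its readLoop/writeLoop goroutines) pinned to this response.
	io.Copy(io.Discard, resp.Body)
	resp.Body.Close()
}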

                                                
                                                
goroutine 3440 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007bcfc0, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3438
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3560 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001aeab10, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0013d9d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001aeab40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007cd900, {0x5292e60, 0xc001889bc0}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007cd900, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3546
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3439 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3438
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4180 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4207
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3941 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001cdc500, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3939
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4153 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4149
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4181 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000626b80, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4207
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3545 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3544
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3322 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001cdc7c0, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3310
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3800 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3796
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3801 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a8ae00, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3796
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4417 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc001cdcd50, 0x1)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00130bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001cdcd80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001eca210, {0x5292e60, 0xc001dec1b0}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001eca210, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4400
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4357 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc0013e8f50, 0xc0013e8f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0xd0?, 0xc0013e8f50, 0xc0013e8f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x20de705?, 0xc000207e00?, 0xc000b544d0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4377
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3932 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3931
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4710 [IO wait]:
internal/poll.runtime_pollWait(0x4dde1c80, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0013ba660?, 0xc00159a481?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0013ba660, {0xc00159a481, 0xbb7f, 0xbb7f})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00189a970, {0xc00159a481?, 0xc0013ecd50?, 0xfe2a?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00160d890, {0x5291658, 0xc001d20880})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x52917e0, 0xc00160d890}, {0x5291658, 0xc001d20880}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0013ece78?, {0x52917e0, 0xc00160d890})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x682c350?, {0x52917e0?, 0xc00160d890?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x52917e0, 0xc00160d890}, {0x5291740, 0xc00189a970}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc000069c70?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4708
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 3925 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3921
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3931 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc0018d3750, 0xc0018d3798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0x0?, 0xc0018d3750, 0xc0018d3798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0x2548cb6?, 0xc00174e600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00174e600?, 0x2546205?, 0xc001602d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3941
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3930 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001cdc4d0, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0013d8d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001cdc500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008146c0, {0x5292e60, 0xc001d1f860}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008146c0, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3941
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3915 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3914
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4358 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4357
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4419 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4418
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4213 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4212
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4212 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc001edd750, 0xc001edd798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0xa0?, 0xc001edd750, 0xc001edd798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0xc001a99d40?, 0x2093d00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001edd7d0?, 0x20de764?, 0xc0014d62a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4181
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4211 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000626a90, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014dfd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000626b80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007e5ae0, {0x5292e60, 0xc00136bbc0}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007e5ae0, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4181
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4356 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001cdd090, 0xf)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0016a2d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001cdd0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b1f730, {0x5292e60, 0xc0014b6d20}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b1f730, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4377
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4399 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4395
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4653 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x52ba080, 0xc000068310}, 0xc000506750, 0xc000506798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x52ba080, 0xc000068310}, 0x0?, 0xc000506750, 0xc000506798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x52ba080?, 0xc000068310?}, 0x2548cb6?, 0xc001660c00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005067d0?, 0x20de764?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4661
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4635 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001a671e0, {0x3d35302?, 0xc001792a80?}, 0xc001c96800)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001a671e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001a671e0, 0xc001d3cd00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3261
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4661 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b3ba00, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4640
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4574 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x4dde1b78, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001a1ff20?, 0xc00134cc97?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001a1ff20, {0xc00134cc97, 0x369, 0x369})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001d20308, {0xc00134cc97?, 0x20dc807?, 0x22e?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0007d2d80, {0x5291658, 0xc00189a350})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x52917e0, 0xc0007d2d80}, {0x5291658, 0xc00189a350}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x679d880?, {0x52917e0, 0xc0007d2d80})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x682c350?, {0x52917e0?, 0xc0007d2d80?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x52917e0, 0xc0007d2d80}, {0x5291740, 0xc001d20308}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc000b0e300?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4573
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 4652 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001b3b9d0, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0018d0d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x52d4b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b3ba00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b1ee60, {0x5292e60, 0xc0014bd710}, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b1ee60, 0x3b9aca00, 0x0, 0x1, 0xc000068310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4661
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4711 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c7f800, 0xc000069f10)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 4708
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 4576 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b63200, 0xc000b21f10)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 4573
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 4575 [IO wait]:
internal/poll.runtime_pollWait(0x4dde22b0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000b343c0?, 0xc001e40b6f?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000b343c0, {0xc001e40b6f, 0x1d491, 0x1d491})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001d20320, {0xc001e40b6f?, 0x1fb31e5?, 0x1fe8a?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0007d2db0, {0x5291658, 0xc00189a358})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x52917e0, 0xc0007d2db0}, {0x5291658, 0xc00189a358}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x52cb5c0?, {0x52917e0, 0xc0007d2db0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x682c350?, {0x52917e0?, 0xc0007d2db0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x52917e0, 0xc0007d2db0}, {0x5291740, 0xc001d20320}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc00214f0d0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4573
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 4660 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x52b06e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4640
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4535 [chan receive, 4 minutes]:
testing.(*T).Run(0xc001a99860, {0x3d35302?, 0xc001917340?}, 0xc000b0e300)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001a99860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001a99860, 0xc001c97a00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3264
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4708 [syscall, 2 minutes]:
syscall.syscall6(0x4e008768?, 0x90?, 0xc0014a7bf8?, 0x7305108?, 0x90?, 0x1000001f59fc5?, 0x19?)
	/usr/local/go/src/runtime/sys_darwin.go:60 +0x78
syscall.wait4(0xc0014a7bb8?, 0x1f55ac5?, 0x90?, 0x51ebec0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xc0004efdc0?, 0xc0014a7bec, 0xc001d31800?, 0xc00029b250?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).pidWait(0xc000829100)
	/usr/local/go/src/os/exec_unix.go:70 +0x86
os.(*Process).wait(0x1fa01b9?)
	/usr/local/go/src/os/exec_unix.go:30 +0x1b
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001c7f800)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001c7f800)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc001afe9c0, 0xc001c7f800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x52b9e70, 0xc00054d730}, 0xc001afe9c0, {0xc001c42f40, 0x1c}, {0x2d6206d801329758?, 0xc001329760?}, {0x20933f3?, 0x1ff222f?}, {0xc000a76b00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001afe9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001afe9c0, 0xc001c96800)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4635
	/usr/local/go/src/testing/testing.go:1743 +0x390
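
Goroutine 4708 is the one doing actual work at dump time: validateSecondStart has shelled out to the minikube binary via the integration Run helper, and the goroutine is parked in wait4 until that child process exits. Goroutines 4709-4711 are the standard companions os/exec creates for such a command: one output-copying goroutine per pipe plus a context watcher. A stdlib sketch of the same trio (the echo command is a stand-in, not the test's invocation):

package main

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	var stdout, stderr bytes.Buffer
	cmd := exec.CommandContext(ctx, "echo", "hello")
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	// Run = Start + Wait. Start spawns one goroutine per non-*os.File
	// output writer (the "IO wait" copiers above) and a watchCtx
	// goroutine for the context; Wait parks this goroutine in wait4
	// until the child exits, which is goroutine 4708's syscall frame.
	if err := cmd.Run(); err != nil {
		fmt.Println("run failed:", err)
		return
	}
	fmt.Print(stdout.String())
}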

                                                
                                                
goroutine 4709 [IO wait]:
internal/poll.runtime_pollWait(0x4e44fb70, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0013ba5a0?, 0xc00169028a?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0013ba5a0, {0xc00169028a, 0x576, 0x576})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00189a958, {0xc00169028a?, 0x4ddce8f8?, 0x20d?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00160d860, {0x5291658, 0xc001d20878})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x52917e0, 0xc00160d860}, {0x5291658, 0xc001d20878}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x679d880?, {0x52917e0, 0xc00160d860})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x682c350?, {0x52917e0?, 0xc00160d860?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x52917e0, 0xc00160d860}, {0x5291740, 0xc00189a958}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001c96800?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4708
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                    

Test pass (181/219)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 19.49
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.1/json-events 8.8
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.3
18 TestDownloadOnly/v1.31.1/DeleteAll 0.27
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.21
21 TestBinaryMirror 0.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
27 TestAddons/Setup 210.5
29 TestAddons/serial/Volcano 41.64
31 TestAddons/serial/GCPAuth/Namespaces 0.1
34 TestAddons/parallel/Ingress 20.15
35 TestAddons/parallel/InspektorGadget 10.47
36 TestAddons/parallel/MetricsServer 5.48
37 TestAddons/parallel/HelmTiller 9.97
39 TestAddons/parallel/CSI 52.92
40 TestAddons/parallel/Headlamp 18.52
41 TestAddons/parallel/CloudSpanner 6.41
42 TestAddons/parallel/LocalPath 53.82
43 TestAddons/parallel/NvidiaDevicePlugin 5.31
44 TestAddons/parallel/Yakd 11.45
45 TestAddons/StoppedEnableDisable 5.93
53 TestHyperKitDriverInstallOrUpdate 9.09
56 TestErrorSpam/setup 37.66
57 TestErrorSpam/start 1.69
58 TestErrorSpam/status 0.53
59 TestErrorSpam/pause 1.35
60 TestErrorSpam/unpause 1.45
61 TestErrorSpam/stop 153.84
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 79.77
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 37.02
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
73 TestFunctional/serial/CacheCmd/cache/add_local 1.36
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.1
75 TestFunctional/serial/CacheCmd/cache/list 0.08
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.08
78 TestFunctional/serial/CacheCmd/cache/delete 0.16
79 TestFunctional/serial/MinikubeKubectlCmd 1.22
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.61
81 TestFunctional/serial/ExtraConfig 40.62
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 2.8
84 TestFunctional/serial/LogsFileCmd 2.59
85 TestFunctional/serial/InvalidService 4.43
87 TestFunctional/parallel/ConfigCmd 0.5
88 TestFunctional/parallel/DashboardCmd 15.18
89 TestFunctional/parallel/DryRun 1.79
90 TestFunctional/parallel/InternationalLanguage 0.62
91 TestFunctional/parallel/StatusCmd 0.49
95 TestFunctional/parallel/ServiceCmdConnect 7.56
96 TestFunctional/parallel/AddonsCmd 0.23
97 TestFunctional/parallel/PersistentVolumeClaim 27.53
99 TestFunctional/parallel/SSHCmd 0.29
100 TestFunctional/parallel/CpCmd 0.96
101 TestFunctional/parallel/MySQL 26.18
102 TestFunctional/parallel/FileSync 0.24
103 TestFunctional/parallel/CertSync 1.1
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
111 TestFunctional/parallel/License 0.65
112 TestFunctional/parallel/Version/short 0.1
113 TestFunctional/parallel/Version/components 0.51
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.16
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.15
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.15
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.52
119 TestFunctional/parallel/ImageCommands/Setup 1.86
120 TestFunctional/parallel/DockerEnv/bash 0.61
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.02
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.62
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.4
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.25
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.32
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.48
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
131 TestFunctional/parallel/ServiceCmd/DeployApp 23.16
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.36
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.13
137 TestFunctional/parallel/ServiceCmd/List 0.38
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
140 TestFunctional/parallel/ServiceCmd/Format 0.25
141 TestFunctional/parallel/ServiceCmd/URL 0.27
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.14
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.25
149 TestFunctional/parallel/ProfileCmd/profile_list 0.25
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
151 TestFunctional/parallel/MountCmd/any-port 8.02
152 TestFunctional/parallel/MountCmd/specific-port 1.57
153 TestFunctional/parallel/MountCmd/VerifyCleanup 2.34
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 194.97
161 TestMultiControlPlane/serial/DeployApp 6.71
162 TestMultiControlPlane/serial/PingHostFromPods 1.28
163 TestMultiControlPlane/serial/AddWorkerNode 53.02
164 TestMultiControlPlane/serial/NodeLabels 0.05
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.34
166 TestMultiControlPlane/serial/CopyFile 9.09
167 TestMultiControlPlane/serial/StopSecondaryNode 8.76
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.27
169 TestMultiControlPlane/serial/RestartSecondaryNode 43.19
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.35
181 TestImageBuild/serial/Setup 40.15
182 TestImageBuild/serial/NormalBuild 1.76
183 TestImageBuild/serial/BuildWithBuildArg 0.82
184 TestImageBuild/serial/BuildWithDockerIgnore 0.66
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.54
189 TestJSONOutput/start/Command 82.86
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.48
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.45
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 8.32
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.74
217 TestMainNoArgs 0.08
218 TestMinikubeProfile 86.98
224 TestMultiNode/serial/FreshStart2Nodes 105.99
225 TestMultiNode/serial/DeployApp2Nodes 4.97
226 TestMultiNode/serial/PingHostFrom2Pods 0.9
227 TestMultiNode/serial/AddNode 48.34
228 TestMultiNode/serial/MultiNodeLabels 0.05
229 TestMultiNode/serial/ProfileList 0.17
230 TestMultiNode/serial/CopyFile 5.23
231 TestMultiNode/serial/StopNode 2.83
232 TestMultiNode/serial/StartAfterStop 41.59
234 TestMultiNode/serial/DeleteNode 11.12
235 TestMultiNode/serial/StopMultiNode 16.8
236 TestMultiNode/serial/RestartMultiNode 107.72
237 TestMultiNode/serial/ValidateNameConflict 42.1
241 TestPreload 150.53
244 TestSkaffold 114.76
247 TestRunningBinaryUpgrade 104.64
249 TestKubernetesUpgrade 1326.05
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.17
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.92
264 TestStoppedBinaryUpgrade/Setup 1.44
265 TestStoppedBinaryUpgrade/Upgrade 119.95
268 TestStoppedBinaryUpgrade/MinikubeLogs 2.49
277 TestNoKubernetes/serial/StartNoK8sWithVersion 0.47
278 TestNoKubernetes/serial/StartWithK8s 74.32
280 TestNoKubernetes/serial/StartWithStopK8s 8.73
281 TestNoKubernetes/serial/Start 18.78
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
283 TestNoKubernetes/serial/ProfileList 0.53
284 TestNoKubernetes/serial/Stop 2.37
287 TestNoKubernetes/serial/StartNoArgs 19.43
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.16
TestDownloadOnly/v1.20.0/json-events (19.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-222000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-222000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (19.490618495s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (19.49s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-222000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-222000: exit status 85 (293.612699ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-222000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |          |
	|         | -p download-only-222000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 01:37:33
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:37:33.820527    1562 out.go:345] Setting OutFile to fd 1 ...
	I0917 01:37:33.820742    1562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:37:33.820748    1562 out.go:358] Setting ErrFile to fd 2...
	I0917 01:37:33.820753    1562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:37:33.820926    1562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	W0917 01:37:33.821031    1562 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19648-1025/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19648-1025/.minikube/config/config.json: no such file or directory
	I0917 01:37:33.823685    1562 out.go:352] Setting JSON to true
	I0917 01:37:33.846632    1562 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":423,"bootTime":1726561830,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 01:37:33.846727    1562 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 01:37:33.869509    1562 out.go:97] [download-only-222000] minikube v1.34.0 on Darwin 14.6.1
	I0917 01:37:33.869706    1562 notify.go:220] Checking for updates...
	W0917 01:37:33.869715    1562 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 01:37:33.889346    1562 out.go:169] MINIKUBE_LOCATION=19648
	I0917 01:37:33.910686    1562 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 01:37:33.932519    1562 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 01:37:33.953372    1562 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:37:33.974454    1562 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	W0917 01:37:34.016602    1562 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 01:37:34.017099    1562 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 01:37:34.070421    1562 out.go:97] Using the hyperkit driver based on user configuration
	I0917 01:37:34.070503    1562 start.go:297] selected driver: hyperkit
	I0917 01:37:34.070516    1562 start.go:901] validating driver "hyperkit" against <nil>
	I0917 01:37:34.070699    1562 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:37:34.071138    1562 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 01:37:34.475761    1562 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 01:37:34.480604    1562 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:37:34.480622    1562 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 01:37:34.480647    1562 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 01:37:34.484791    1562 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0917 01:37:34.484954    1562 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 01:37:34.484987    1562 cni.go:84] Creating CNI manager for ""
	I0917 01:37:34.485036    1562 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 01:37:34.485101    1562 start.go:340] cluster config:
	{Name:download-only-222000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-222000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:37:34.485310    1562 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:37:34.506712    1562 out.go:97] Downloading VM boot image ...
	I0917 01:37:34.506819    1562 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0917 01:37:42.642830    1562 out.go:97] Starting "download-only-222000" primary control-plane node in "download-only-222000" cluster
	I0917 01:37:42.642869    1562 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 01:37:42.699511    1562 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0917 01:37:42.699535    1562 cache.go:56] Caching tarball of preloaded images
	I0917 01:37:42.699896    1562 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 01:37:42.720585    1562 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0917 01:37:42.720612    1562 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0917 01:37:42.798825    1562 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-222000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-222000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)
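
Note that LogsDuration passes even though `minikube logs` exits non-zero: exit status 85 is what minikube returns when the profile's control-plane host was never created, which is expected for a download-only profile. A minimal sketch of accepting that exit code from Go (illustrative, not the harness's actual assertion):

// Sketch: run "minikube logs -p <profile>" and treat exit status 85
// ("host does not exist") as the expected outcome.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-222000")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Println("expected failure: profile host does not exist (exit 85)")
		return
	}
	fmt.Println("unexpected result:", err)
}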

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-222000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.1/json-events (8.8s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-405000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-405000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperkit : (8.795927435s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.80s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-405000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-405000: exit status 85 (298.650106ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-222000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |                     |
	|         | -p download-only-222000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| delete  | -p download-only-222000        | download-only-222000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| start   | -o=json --download-only        | download-only-405000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |                     |
	|         | -p download-only-405000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 01:37:54
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:37:54.061738    1588 out.go:345] Setting OutFile to fd 1 ...
	I0917 01:37:54.061954    1588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:37:54.061960    1588 out.go:358] Setting ErrFile to fd 2...
	I0917 01:37:54.061964    1588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:37:54.062161    1588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 01:37:54.063713    1588 out.go:352] Setting JSON to true
	I0917 01:37:54.086913    1588 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":444,"bootTime":1726561830,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 01:37:54.087059    1588 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 01:37:54.108893    1588 out.go:97] [download-only-405000] minikube v1.34.0 on Darwin 14.6.1
	I0917 01:37:54.109103    1588 notify.go:220] Checking for updates...
	I0917 01:37:54.130642    1588 out.go:169] MINIKUBE_LOCATION=19648
	I0917 01:37:54.151772    1588 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 01:37:54.172854    1588 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 01:37:54.193699    1588 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:37:54.214800    1588 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	W0917 01:37:54.256611    1588 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 01:37:54.257055    1588 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 01:37:54.287838    1588 out.go:97] Using the hyperkit driver based on user configuration
	I0917 01:37:54.287888    1588 start.go:297] selected driver: hyperkit
	I0917 01:37:54.287900    1588 start.go:901] validating driver "hyperkit" against <nil>
	I0917 01:37:54.288097    1588 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:37:54.288415    1588 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19648-1025/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0917 01:37:54.298439    1588 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0917 01:37:54.302614    1588 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:37:54.302640    1588 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0917 01:37:54.302666    1588 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 01:37:54.305540    1588 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0917 01:37:54.305705    1588 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 01:37:54.305738    1588 cni.go:84] Creating CNI manager for ""
	I0917 01:37:54.305786    1588 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 01:37:54.305803    1588 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 01:37:54.305869    1588 start.go:340] cluster config:
	{Name:download-only-405000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-405000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:37:54.305958    1588 iso.go:125] acquiring lock: {Name:mkc407d80a0f8e78f0e24d63c464bb315c80139b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:37:54.326916    1588 out.go:97] Starting "download-only-405000" primary control-plane node in "download-only-405000" cluster
	I0917 01:37:54.326953    1588 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 01:37:54.383633    1588 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 01:37:54.383679    1588 cache.go:56] Caching tarball of preloaded images
	I0917 01:37:54.384134    1588 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 01:37:54.405553    1588 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0917 01:37:54.405573    1588 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0917 01:37:54.485481    1588 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /Users/jenkins/minikube-integration/19648-1025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-405000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-405000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.30s)

TestDownloadOnly/v1.31.1/DeleteAll (0.27s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.27s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-405000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.98s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-670000 --alsologtostderr --binary-mirror http://127.0.0.1:49640 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-670000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-670000
--- PASS: TestBinaryMirror (0.98s)
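
TestBinaryMirror verifies that --binary-mirror redirects the Kubernetes binary downloads to the given endpoint; here the harness serves one on 127.0.0.1:49640. A toy stand-in for such a mirror (the ./mirror staging directory is hypothetical, and a real mirror must mimic the upstream URL layout):

// Sketch: a local file server that can stand in for --binary-mirror.
// ./mirror is a hypothetical staging area holding the k8s release binaries.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Println("serving mirror on http://127.0.0.1:49640")
	log.Fatal(http.ListenAndServe("127.0.0.1:49640", nil))
}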

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-190000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-190000: exit status 85 (192.392753ms)
-- stdout --
	* Profile "addons-190000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-190000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-190000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-190000: exit status 85 (213.323138ms)
-- stdout --
	* Profile "addons-190000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-190000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (210.5s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-190000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-190000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m30.500180939s)
--- PASS: TestAddons/Setup (210.50s)

TestAddons/serial/Volcano (41.64s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 12.071441ms
addons_test.go:913: volcano-controller stabilized in 12.117357ms
addons_test.go:897: volcano-scheduler stabilized in 12.138117ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-jqj8b" [39f496a2-4756-46e7-aa16-f90f225eebe5] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00354861s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-ctvwr" [c0eb5c46-e365-4732-8d67-611362930084] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003511598s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-n9rkn" [a0d3cb1c-749c-40fc-bbad-f06f4a48951f] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0042561s
addons_test.go:932: (dbg) Run:  kubectl --context addons-190000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-190000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-190000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [06ea7714-deec-4fb5-82ac-60ba592797aa] Pending
helpers_test.go:344: "test-job-nginx-0" [06ea7714-deec-4fb5-82ac-60ba592797aa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [06ea7714-deec-4fb5-82ac-60ba592797aa] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.002933188s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-190000 addons disable volcano --alsologtostderr -v=1: (10.332903153s)
--- PASS: TestAddons/serial/Volcano (41.64s)
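
The "waiting 6m0s for pods matching ..." lines come from a label-selector poll helper. A rough sketch of the same idea against the cluster above, shelling out to kubectl just as the test logs do (timeout shortened; not the helper's real implementation):

// Sketch: poll until a pod matching a label selector reports Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", "addons-190000",
			"get", "pods", "-n", "volcano-system", "-l", "app=volcano-scheduler",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.Contains(string(out), "Running") {
			fmt.Println("pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}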

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-190000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-190000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Ingress (20.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-190000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-190000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-190000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ca7671f3-8586-4c3d-8ec2-cfcf093f63cc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ca7671f3-8586-4c3d-8ec2-cfcf093f63cc] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004658828s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-190000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-190000 addons disable ingress --alsologtostderr -v=1: (7.437001453s)
--- PASS: TestAddons/parallel/Ingress (20.15s)
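
The ingress check curls 127.0.0.1 from inside the node via `minikube ssh`, relying on ingress-nginx routing by Host header rather than by IP. The same mechanics in Go, where setting req.Host overrides the header (illustrative; run it wherever the ingress is actually reachable):

// Sketch: reproduce `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`;
// ingress-nginx picks the backend from the Host header, not the request IP.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // overrides the Host header Go would send
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}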

TestAddons/parallel/InspektorGadget (10.47s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-smz2v" [40716852-62c7-48dd-ae8b-58ac82a9bf37] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00299934s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-190000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-190000: (5.468261285s)
--- PASS: TestAddons/parallel/InspektorGadget (10.47s)

TestAddons/parallel/MetricsServer (5.48s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.837546ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zzkrt" [ce680a4d-91c5-4b38-9c62-9e832f1427c0] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003545674s
addons_test.go:417: (dbg) Run:  kubectl --context addons-190000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.48s)

TestAddons/parallel/HelmTiller (9.97s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.724183ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-x9frd" [ed4417f5-1c3f-4365-8f29-a33369722c59] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005449195s
addons_test.go:475: (dbg) Run:  kubectl --context addons-190000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-190000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.558936237s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.97s)

TestAddons/parallel/CSI (52.92s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.805821ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-190000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-190000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4e4c34a3-1d80-45da-8465-970c7ee55eec] Pending
helpers_test.go:344: "task-pv-pod" [4e4c34a3-1d80-45da-8465-970c7ee55eec] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4e4c34a3-1d80-45da-8465-970c7ee55eec] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003950523s
addons_test.go:590: (dbg) Run:  kubectl --context addons-190000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-190000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-190000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-190000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-190000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-190000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-190000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [89865d8a-d9ed-4b85-b722-a4d012cf444a] Pending
helpers_test.go:344: "task-pv-pod-restore" [89865d8a-d9ed-4b85-b722-a4d012cf444a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [89865d8a-d9ed-4b85-b722-a4d012cf444a] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00362572s
addons_test.go:632: (dbg) Run:  kubectl --context addons-190000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-190000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-190000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-190000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.452759274s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.92s)
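
The block of repeated helpers_test.go:394 lines above is a single poll loop re-reading the PVC phase until it leaves Pending. A condensed sketch of that loop (attempt count and interval are made up, not the helper's actual values):

// Sketch: re-run `kubectl get pvc -o jsonpath={.status.phase}` until Bound.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 60; i++ {
		out, err := exec.Command("kubectl", "--context", "addons-190000",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && string(out) == "Bound" {
			fmt.Println("pvc bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc")
}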

TestAddons/parallel/Headlamp (18.52s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-190000 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-190000 --alsologtostderr -v=1: (1.060659703s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-gsw25" [25ab5c71-7e30-4e28-ae76-d0efe86fa6c0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-gsw25" [25ab5c71-7e30-4e28-ae76-d0efe86fa6c0] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004031145s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-190000 addons disable headlamp --alsologtostderr -v=1: (5.459005006s)
--- PASS: TestAddons/parallel/Headlamp (18.52s)

TestAddons/parallel/CloudSpanner (6.41s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-wwtnh" [e0f1fdbc-cb73-4ff4-9c04-207128eea6fb] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003634678s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-190000
--- PASS: TestAddons/parallel/CloudSpanner (6.41s)

TestAddons/parallel/LocalPath (53.82s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-190000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-190000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-190000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [19d5fb77-7d3c-4962-ba93-52ae7be5b5dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [19d5fb77-7d3c-4962-ba93-52ae7be5b5dc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [19d5fb77-7d3c-4962-ba93-52ae7be5b5dc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004052257s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-190000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 ssh "cat /opt/local-path-provisioner/pvc-504e0720-4f81-475b-a09a-542324f00b19_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-190000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-190000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-190000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.188671115s)
--- PASS: TestAddons/parallel/LocalPath (53.82s)

TestAddons/parallel/NvidiaDevicePlugin (5.31s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nzgpj" [76589ce8-f1e9-4d47-98e3-18f0b6b25a2d] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004822421s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-190000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.31s)

TestAddons/parallel/Yakd (11.45s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-gzxz2" [50f4022c-8510-4323-aec6-bd613f0f28ae] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005687811s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-190000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-190000 addons disable yakd --alsologtostderr -v=1: (5.448220525s)
--- PASS: TestAddons/parallel/Yakd (11.45s)

TestAddons/StoppedEnableDisable (5.93s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-190000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-190000: (5.378118438s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-190000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-190000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-190000
--- PASS: TestAddons/StoppedEnableDisable (5.93s)

TestHyperKitDriverInstallOrUpdate (9.09s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.09s)

TestErrorSpam/setup (37.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-940000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-940000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 --driver=hyperkit : (37.656371341s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (37.66s)
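
TestErrorSpam/setup starts a cluster and then checks that stderr contains only allowlisted lines; the kubectl version-skew warning above is one such "acceptable stderr" entry. The core of that filtering, sketched (the patterns here are illustrative, not the test's real allowlist):

// Sketch: flag any stderr line not matched by an expected pattern as spam.
package main

import (
	"fmt"
	"strings"
)

func main() {
	allowed := []string{
		"which may have incompatibilities with Kubernetes",
	}
	stderr := "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
	for _, line := range strings.Split(strings.TrimSpace(stderr), "\n") {
		ok := false
		for _, pat := range allowed {
			if strings.Contains(line, pat) {
				ok = true
				break
			}
		}
		if !ok {
			fmt.Println("unexpected stderr:", line)
		}
	}
	fmt.Println("stderr check done")
}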

TestErrorSpam/start (1.69s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 start --dry-run
--- PASS: TestErrorSpam/start (1.69s)

TestErrorSpam/status (0.53s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 status
--- PASS: TestErrorSpam/status (0.53s)

TestErrorSpam/pause (1.35s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 pause
--- PASS: TestErrorSpam/pause (1.35s)

TestErrorSpam/unpause (1.45s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 unpause
--- PASS: TestErrorSpam/unpause (1.45s)

TestErrorSpam/stop (153.84s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 stop: (3.387875458s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 stop: (1m15.228106912s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-940000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-940000 stop: (1m15.223938952s)
--- PASS: TestErrorSpam/stop (153.84s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19648-1025/.minikube/files/etc/test/nested/copy/1560/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.77s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-965000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0917 01:56:35.739657    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:35.748008    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:35.759576    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:35.781365    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:35.824076    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:35.906603    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:36.069278    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:36.391318    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:37.033330    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:38.314630    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:40.877580    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:45.999352    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:56.241223    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:57:16.723969    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-965000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m19.768526458s)
--- PASS: TestFunctional/serial/StartWithProxy (79.77s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.02s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-965000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-965000 --alsologtostderr -v=8: (37.022210806s)
functional_test.go:663: soft start took 37.022647351s for "functional-965000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.02s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-965000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-965000 cache add registry.k8s.io/pause:3.1: (1.211430872s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 cache add registry.k8s.io/pause:3.3
E0917 01:57:57.687177    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-965000 cache add registry.k8s.io/pause:3.3: (1.137978858s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-965000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local936086141/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 cache add minikube-local-cache-test:functional-965000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 cache delete minikube-local-cache-test:functional-965000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-965000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.10s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-965000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (141.866969ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.08s)
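
The reload flow above can be replayed by hand against the same profile; a minimal sketch using the commands exactly as the test invokes them:

	out/minikube-darwin-amd64 -p functional-965000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-965000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: "no such image ... present"
	out/minikube-darwin-amd64 -p functional-965000 cache reload
	out/minikube-darwin-amd64 -p functional-965000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: image restored

cache reload pushes every cached image back into the node, which is what makes the second inspecti succeed.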

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.22s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 kubectl -- --context functional-965000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-965000 kubectl -- --context functional-965000 get pods: (1.222520428s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.22s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.61s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-965000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-965000 get pods: (1.608854429s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.61s)

TestFunctional/serial/ExtraConfig (40.62s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-965000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-965000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.623810366s)
functional_test.go:761: restart took 40.62396985s for "functional-965000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.62s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-965000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.8s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-965000 logs: (2.796090388s)
--- PASS: TestFunctional/serial/LogsCmd (2.80s)

TestFunctional/serial/LogsFileCmd (2.59s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1263529089/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-965000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1263529089/001/logs.txt: (2.593314236s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.59s)

TestFunctional/serial/InvalidService (4.43s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-965000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-965000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-965000: exit status 115 (263.943111ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:30913 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-965000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-965000 delete -f testdata/invalidsvc.yaml: (1.034476053s)
--- PASS: TestFunctional/serial/InvalidService (4.43s)
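
Exit status 115 is minikube's SVC_UNREACHABLE path: the Service exists and receives a NodePort, but no running pod backs it. A minimal sketch of the same check (the manifest path is relative to the minikube test tree):

	kubectl --context functional-965000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-amd64 service invalid-svc -p functional-965000    # exit 115: SVC_UNREACHABLE, no running pod
	kubectl --context functional-965000 delete -f testdata/invalidsvc.yaml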

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-965000 config get cpus: exit status 14 (68.785297ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-965000 config get cpus: exit status 14 (56.065744ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
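
Exit status 14 is what config get returns for an unset key, which is what both unset/get round-trips above assert; a minimal sketch:

	out/minikube-darwin-amd64 -p functional-965000 config set cpus 2
	out/minikube-darwin-amd64 -p functional-965000 config get cpus      # prints 2, exit 0
	out/minikube-darwin-amd64 -p functional-965000 config unset cpus
	out/minikube-darwin-amd64 -p functional-965000 config get cpus      # "specified key could not be found in config", exit 14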

TestFunctional/parallel/DashboardCmd (15.18s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-965000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-965000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3033: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.18s)

TestFunctional/parallel/DryRun (1.79s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-965000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-965000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (687.281364ms)

-- stdout --
	* [functional-965000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0917 01:59:55.496828    2938 out.go:345] Setting OutFile to fd 1 ...
	I0917 01:59:55.497102    2938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:59:55.497106    2938 out.go:358] Setting ErrFile to fd 2...
	I0917 01:59:55.497110    2938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:59:55.497948    2938 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 01:59:55.499811    2938 out.go:352] Setting JSON to false
	I0917 01:59:55.523424    2938 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1765,"bootTime":1726561830,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 01:59:55.523590    2938 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 01:59:55.545193    2938 out.go:177] * [functional-965000] minikube v1.34.0 on Darwin 14.6.1
	I0917 01:59:55.587211    2938 notify.go:220] Checking for updates...
	I0917 01:59:55.607787    2938 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 01:59:55.627950    2938 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 01:59:55.649055    2938 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 01:59:55.707156    2938 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:59:55.749012    2938 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 01:59:55.770158    2938 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:59:55.791982    2938 config.go:182] Loaded profile config "functional-965000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 01:59:55.792657    2938 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:59:55.792726    2938 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:59:55.802356    2938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50884
	I0917 01:59:55.802768    2938 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:59:55.803190    2938 main.go:141] libmachine: Using API Version  1
	I0917 01:59:55.803204    2938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:59:55.803429    2938 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:59:55.803592    2938 main.go:141] libmachine: (functional-965000) Calling .DriverName
	I0917 01:59:55.803785    2938 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 01:59:55.804071    2938 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:59:55.804097    2938 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:59:55.813125    2938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50886
	I0917 01:59:55.813595    2938 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:59:55.814056    2938 main.go:141] libmachine: Using API Version  1
	I0917 01:59:55.814073    2938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:59:55.814372    2938 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:59:55.814548    2938 main.go:141] libmachine: (functional-965000) Calling .DriverName
	I0917 01:59:55.843921    2938 out.go:177] * Using the hyperkit driver based on existing profile
	I0917 01:59:55.918114    2938 start.go:297] selected driver: hyperkit
	I0917 01:59:55.918141    2938 start.go:901] validating driver "hyperkit" against &{Name:functional-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:59:55.918344    2938 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:59:55.944913    2938 out.go:201] 
	W0917 01:59:55.965989    2938 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 01:59:56.024150    2938 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-965000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
functional_test.go:991: (dbg) Done: out/minikube-darwin-amd64 start -p functional-965000 --dry-run --alsologtostderr -v=1 --driver=hyperkit : (1.106495199s)
--- PASS: TestFunctional/parallel/DryRun (1.79s)
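
Both invocations only validate against the existing profile; the first fails because 250MB is below the 1800MB minimum the validator enforces. A minimal sketch of the two outcomes:

	out/minikube-darwin-amd64 start -p functional-965000 --dry-run --memory 250MB --driver=hyperkit    # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
	out/minikube-darwin-amd64 start -p functional-965000 --dry-run --driver=hyperkit                   # exit 0: validation passes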

TestFunctional/parallel/InternationalLanguage (0.62s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-965000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-965000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (619.141019ms)

-- stdout --
	* [functional-965000] minikube v1.34.0 sur Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0917 01:59:57.285681    2985 out.go:345] Setting OutFile to fd 1 ...
	I0917 01:59:57.285839    2985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:59:57.285844    2985 out.go:358] Setting ErrFile to fd 2...
	I0917 01:59:57.285847    2985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:59:57.286057    2985 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 01:59:57.287563    2985 out.go:352] Setting JSON to false
	I0917 01:59:57.312012    2985 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1767,"bootTime":1726561830,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0917 01:59:57.312127    2985 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 01:59:57.333381    2985 out.go:177] * [functional-965000] minikube v1.34.0 sur Darwin 14.6.1
	I0917 01:59:57.374806    2985 notify.go:220] Checking for updates...
	I0917 01:59:57.394896    2985 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 01:59:57.415900    2985 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	I0917 01:59:57.458199    2985 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0917 01:59:57.499892    2985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:59:57.542082    2985 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	I0917 01:59:57.584028    2985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:59:57.605502    2985 config.go:182] Loaded profile config "functional-965000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 01:59:57.605970    2985 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:59:57.606032    2985 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:59:57.615547    2985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50933
	I0917 01:59:57.615891    2985 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:59:57.616311    2985 main.go:141] libmachine: Using API Version  1
	I0917 01:59:57.616320    2985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:59:57.616535    2985 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:59:57.616641    2985 main.go:141] libmachine: (functional-965000) Calling .DriverName
	I0917 01:59:57.616835    2985 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 01:59:57.617125    2985 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 01:59:57.617152    2985 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 01:59:57.625543    2985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50935
	I0917 01:59:57.625909    2985 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:59:57.626241    2985 main.go:141] libmachine: Using API Version  1
	I0917 01:59:57.626250    2985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:59:57.626457    2985 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:59:57.626564    2985 main.go:141] libmachine: (functional-965000) Calling .DriverName
	I0917 01:59:57.654893    2985 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0917 01:59:57.697056    2985 start.go:297] selected driver: hyperkit
	I0917 01:59:57.697087    2985 start.go:901] validating driver "hyperkit" against &{Name:functional-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:59:57.697320    2985 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:59:57.722908    2985 out.go:201] 
	W0917 01:59:57.743860    2985 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 01:59:57.780875    2985 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.62s)
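
This is the same under-provisioned dry-run as in DryRun, exercised under a French locale; a sketch assuming the locale is injected via LC_ALL (the exact mechanism the test uses is not visible in this log):

	LC_ALL=fr out/minikube-darwin-amd64 start -p functional-965000 --dry-run --memory 250MB --driver=hyperkit
	# expected: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ...", exit 23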

TestFunctional/parallel/StatusCmd (0.49s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.49s)
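
status accepts a Go template via -f and JSON via -o, as the three runs above show; a minimal sketch (template quoted here for shell safety):

	out/minikube-darwin-amd64 -p functional-965000 status                                                 # default table output
	out/minikube-darwin-amd64 -p functional-965000 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'   # custom Go template
	out/minikube-darwin-amd64 -p functional-965000 status -o json                                         # machine-readable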

TestFunctional/parallel/ServiceCmdConnect (7.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-965000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-965000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-ctmhp" [5c8bba67-a541-4c7b-9a38-32e98fa3216f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-ctmhp" [5c8bba67-a541-4c7b-9a38-32e98fa3216f] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003751067s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:31445
functional_test.go:1675: http://192.169.0.4:31445: success! body:

Hostname: hello-node-connect-67bdd5bbb4-ctmhp

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:31445
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.56s)
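
The flow above creates a deployment, exposes it as a NodePort service, and resolves the node URL through minikube; a minimal sketch (the final curl is an addition here, the test performs the HTTP check in Go):

	kubectl --context functional-965000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-965000 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-darwin-amd64 -p functional-965000 service hello-node-connect --url)
	curl "$URL"    # echoserver reflects the request, as in the Hostname/Request blocks above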

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (27.53s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ba323ca2-4a79-40a0-b9c4-dcf71cda9415] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005414204s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-965000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-965000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-965000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-965000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4397bd1f-47ff-414b-8940-e2d490839302] Pending
helpers_test.go:344: "sp-pod" [4397bd1f-47ff-414b-8940-e2d490839302] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4397bd1f-47ff-414b-8940-e2d490839302] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.002846036s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-965000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-965000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-965000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [265f4de2-43e7-463c-ae7a-e50259988213] Pending
helpers_test.go:344: "sp-pod" [265f4de2-43e7-463c-ae7a-e50259988213] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [265f4de2-43e7-463c-ae7a-e50259988213] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002797254s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-965000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.53s)
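
The point of the delete/recreate cycle above is persistence: the file touched by the first sp-pod is still visible to the second because both mount the same claim. A minimal sketch (manifest paths are relative to the minikube test tree):

	kubectl --context functional-965000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-965000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-965000 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-965000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-965000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-965000 exec sp-pod -- ls /tmp/mount    # foo survives the pod recreation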

TestFunctional/parallel/SSHCmd (0.29s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.29s)

TestFunctional/parallel/CpCmd (0.96s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh -n functional-965000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 cp functional-965000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd666465490/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh -n functional-965000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh -n functional-965000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.96s)
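
cp copies in both directions and creates missing directories on the target, which is what the three runs above cover; a minimal sketch:

	out/minikube-darwin-amd64 -p functional-965000 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
	out/minikube-darwin-amd64 -p functional-965000 cp functional-965000:/home/docker/cp-test.txt ./cp-test.txt   # node -> host
	out/minikube-darwin-amd64 -p functional-965000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt       # parent dirs created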

TestFunctional/parallel/MySQL (26.18s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-965000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-hh85h" [85bac531-ee42-409b-a3f4-de5e8b547183] Pending
helpers_test.go:344: "mysql-6cdb49bbb-hh85h" [85bac531-ee42-409b-a3f4-de5e8b547183] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-hh85h" [85bac531-ee42-409b-a3f4-de5e8b547183] Running
E0917 01:59:19.609995    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.005521416s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-965000 exec mysql-6cdb49bbb-hh85h -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-965000 exec mysql-6cdb49bbb-hh85h -- mysql -ppassword -e "show databases;": exit status 1 (128.165045ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-965000 exec mysql-6cdb49bbb-hh85h -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-965000 exec mysql-6cdb49bbb-hh85h -- mysql -ppassword -e "show databases;": exit status 1 (105.418732ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-965000 exec mysql-6cdb49bbb-hh85h -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.18s)
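
The two ERROR 2002 exits are expected: the pod is Running but mysqld has not created its socket yet, so the test retries until the query succeeds. The same wait, done by hand (pod name taken from this run):

	until kubectl --context functional-965000 exec mysql-6cdb49bbb-hh85h -- mysql -ppassword -e "show databases;"; do
		sleep 2    # mysqld socket not ready yet
	done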

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1560/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "sudo cat /etc/test/nested/copy/1560/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.1s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1560.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "sudo cat /etc/ssl/certs/1560.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1560.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "sudo cat /usr/share/ca-certificates/1560.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15602.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "sudo cat /etc/ssl/certs/15602.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15602.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "sudo cat /usr/share/ca-certificates/15602.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.10s)
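
The numeric file name 51391683.0 checked above looks like an OpenSSL subject-hash name: CertSync installs both the .pem and a hash-named copy so TLS libraries can look the certificate up by hash. A hedged way to confirm the correspondence inside the VM, assuming openssl is available there:

    # The subject hash of the cert should match the .0 file name (sketch).
    minikube -p functional-965000 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/1560.pem"
    # expected: 51391683, i.e. the basename of /etc/ssl/certs/51391683.0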

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-965000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-965000 ssh "sudo systemctl is-active crio": exit status 1 (223.188388ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)
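
systemctl is-active exits 0 only for an active unit; "inactive" with exit status 3 is precisely the non-zero result the test asserts, since docker (not crio) is this cluster's configured runtime. A hedged sketch of the paired checks:

    # With docker as the configured runtime, docker is active and crio is not.
    minikube -p functional-965000 ssh "sudo systemctl is-active docker"   # expect "active", exit 0
    minikube -p functional-965000 ssh "sudo systemctl is-active crio"     # expect "inactive", exit 3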

                                                
                                    
TestFunctional/parallel/License (0.65s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.65s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-965000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-965000
docker.io/kicbase/echo-server:functional-965000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-965000 image ls --format short --alsologtostderr:
I0917 01:59:59.528725    3035 out.go:345] Setting OutFile to fd 1 ...
I0917 01:59:59.528936    3035 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:59:59.528941    3035 out.go:358] Setting ErrFile to fd 2...
I0917 01:59:59.528945    3035 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:59:59.529136    3035 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
I0917 01:59:59.529782    3035 config.go:182] Loaded profile config "functional-965000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:59:59.529879    3035 config.go:182] Loaded profile config "functional-965000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:59:59.530228    3035 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 01:59:59.530270    3035 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 01:59:59.538685    3035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50997
I0917 01:59:59.539091    3035 main.go:141] libmachine: () Calling .GetVersion
I0917 01:59:59.539487    3035 main.go:141] libmachine: Using API Version  1
I0917 01:59:59.539522    3035 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 01:59:59.539766    3035 main.go:141] libmachine: () Calling .GetMachineName
I0917 01:59:59.539896    3035 main.go:141] libmachine: (functional-965000) Calling .GetState
I0917 01:59:59.539977    3035 main.go:141] libmachine: (functional-965000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0917 01:59:59.540049    3035 main.go:141] libmachine: (functional-965000) DBG | hyperkit pid from json: 2307
I0917 01:59:59.541329    3035 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 01:59:59.541349    3035 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 01:59:59.549664    3035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50999
I0917 01:59:59.550019    3035 main.go:141] libmachine: () Calling .GetVersion
I0917 01:59:59.550361    3035 main.go:141] libmachine: Using API Version  1
I0917 01:59:59.550378    3035 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 01:59:59.550580    3035 main.go:141] libmachine: () Calling .GetMachineName
I0917 01:59:59.550674    3035 main.go:141] libmachine: (functional-965000) Calling .DriverName
I0917 01:59:59.550827    3035 ssh_runner.go:195] Run: systemctl --version
I0917 01:59:59.550845    3035 main.go:141] libmachine: (functional-965000) Calling .GetSSHHostname
I0917 01:59:59.550914    3035 main.go:141] libmachine: (functional-965000) Calling .GetSSHPort
I0917 01:59:59.550986    3035 main.go:141] libmachine: (functional-965000) Calling .GetSSHKeyPath
I0917 01:59:59.551065    3035 main.go:141] libmachine: (functional-965000) Calling .GetSSHUsername
I0917 01:59:59.551154    3035 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/functional-965000/id_rsa Username:docker}
I0917 01:59:59.580991    3035 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0917 01:59:59.604859    3035 main.go:141] libmachine: Making call to close driver server
I0917 01:59:59.604868    3035 main.go:141] libmachine: (functional-965000) Calling .Close
I0917 01:59:59.605004    3035 main.go:141] libmachine: Successfully made call to close driver server
I0917 01:59:59.605013    3035 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 01:59:59.605020    3035 main.go:141] libmachine: Making call to close driver server
I0917 01:59:59.605039    3035 main.go:141] libmachine: (functional-965000) Calling .Close
I0917 01:59:59.605057    3035 main.go:141] libmachine: (functional-965000) DBG | Closing plugin on server side
I0917 01:59:59.605194    3035 main.go:141] libmachine: Successfully made call to close driver server
I0917 01:59:59.605194    3035 main.go:141] libmachine: (functional-965000) DBG | Closing plugin on server side
I0917 01:59:59.605202    3035 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-965000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-965000 | 3062d164c7e60 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| docker.io/kicbase/echo-server               | functional-965000 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-965000 image ls --format table --alsologtostderr:
I0917 01:59:59.987747    3047 out.go:345] Setting OutFile to fd 1 ...
I0917 01:59:59.987937    3047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:59:59.987943    3047 out.go:358] Setting ErrFile to fd 2...
I0917 01:59:59.987946    3047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:59:59.988123    3047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
I0917 01:59:59.988759    3047 config.go:182] Loaded profile config "functional-965000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:59:59.988856    3047 config.go:182] Loaded profile config "functional-965000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:59:59.989212    3047 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 01:59:59.989258    3047 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 01:59:59.997704    3047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51016
I0917 01:59:59.998148    3047 main.go:141] libmachine: () Calling .GetVersion
I0917 01:59:59.998528    3047 main.go:141] libmachine: Using API Version  1
I0917 01:59:59.998536    3047 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 01:59:59.998730    3047 main.go:141] libmachine: () Calling .GetMachineName
I0917 01:59:59.998844    3047 main.go:141] libmachine: (functional-965000) Calling .GetState
I0917 01:59:59.998932    3047 main.go:141] libmachine: (functional-965000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0917 01:59:59.998997    3047 main.go:141] libmachine: (functional-965000) DBG | hyperkit pid from json: 2307
I0917 02:00:00.000286    3047 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 02:00:00.000307    3047 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 02:00:00.009114    3047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51018
I0917 02:00:00.009484    3047 main.go:141] libmachine: () Calling .GetVersion
I0917 02:00:00.009824    3047 main.go:141] libmachine: Using API Version  1
I0917 02:00:00.009841    3047 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 02:00:00.010041    3047 main.go:141] libmachine: () Calling .GetMachineName
I0917 02:00:00.010146    3047 main.go:141] libmachine: (functional-965000) Calling .DriverName
I0917 02:00:00.010319    3047 ssh_runner.go:195] Run: systemctl --version
I0917 02:00:00.010338    3047 main.go:141] libmachine: (functional-965000) Calling .GetSSHHostname
I0917 02:00:00.010412    3047 main.go:141] libmachine: (functional-965000) Calling .GetSSHPort
I0917 02:00:00.010493    3047 main.go:141] libmachine: (functional-965000) Calling .GetSSHKeyPath
I0917 02:00:00.010594    3047 main.go:141] libmachine: (functional-965000) Calling .GetSSHUsername
I0917 02:00:00.010688    3047 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/functional-965000/id_rsa Username:docker}
I0917 02:00:00.042012    3047 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0917 02:00:00.065136    3047 main.go:141] libmachine: Making call to close driver server
I0917 02:00:00.065145    3047 main.go:141] libmachine: (functional-965000) Calling .Close
I0917 02:00:00.065295    3047 main.go:141] libmachine: Successfully made call to close driver server
I0917 02:00:00.065303    3047 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 02:00:00.065304    3047 main.go:141] libmachine: (functional-965000) DBG | Closing plugin on server side
I0917 02:00:00.065307    3047 main.go:141] libmachine: Making call to close driver server
I0917 02:00:00.065360    3047 main.go:141] libmachine: (functional-965000) Calling .Close
I0917 02:00:00.065507    3047 main.go:141] libmachine: (functional-965000) DBG | Closing plugin on server side
I0917 02:00:00.065510    3047 main.go:141] libmachine: Successfully made call to close driver server
I0917 02:00:00.065526    3047 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-965000 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"3062d164c7e60433f19689b905f6eba6c260d4311ba20e54e5272dd06bd7ebbe","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-965000"],"size":"30"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e
6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/librar
y/nginx:alpine"],"size":"43200000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-965000"],"size":"4940000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-965000 image ls --format json --alsologtostderr:
I0917 01:59:59.835931    3043 out.go:345] Setting OutFile to fd 1 ...
I0917 01:59:59.836229    3043 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:59:59.836235    3043 out.go:358] Setting ErrFile to fd 2...
I0917 01:59:59.836238    3043 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:59:59.836421    3043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
I0917 01:59:59.837069    3043 config.go:182] Loaded profile config "functional-965000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:59:59.837160    3043 config.go:182] Loaded profile config "functional-965000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:59:59.837526    3043 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 01:59:59.837571    3043 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 01:59:59.845873    3043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51010
I0917 01:59:59.846366    3043 main.go:141] libmachine: () Calling .GetVersion
I0917 01:59:59.846775    3043 main.go:141] libmachine: Using API Version  1
I0917 01:59:59.846786    3043 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 01:59:59.847045    3043 main.go:141] libmachine: () Calling .GetMachineName
I0917 01:59:59.847164    3043 main.go:141] libmachine: (functional-965000) Calling .GetState
I0917 01:59:59.847257    3043 main.go:141] libmachine: (functional-965000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0917 01:59:59.847323    3043 main.go:141] libmachine: (functional-965000) DBG | hyperkit pid from json: 2307
I0917 01:59:59.848639    3043 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 01:59:59.848660    3043 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 01:59:59.857129    3043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51012
I0917 01:59:59.857506    3043 main.go:141] libmachine: () Calling .GetVersion
I0917 01:59:59.857831    3043 main.go:141] libmachine: Using API Version  1
I0917 01:59:59.857849    3043 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 01:59:59.858082    3043 main.go:141] libmachine: () Calling .GetMachineName
I0917 01:59:59.858210    3043 main.go:141] libmachine: (functional-965000) Calling .DriverName
I0917 01:59:59.858378    3043 ssh_runner.go:195] Run: systemctl --version
I0917 01:59:59.858395    3043 main.go:141] libmachine: (functional-965000) Calling .GetSSHHostname
I0917 01:59:59.858475    3043 main.go:141] libmachine: (functional-965000) Calling .GetSSHPort
I0917 01:59:59.858546    3043 main.go:141] libmachine: (functional-965000) Calling .GetSSHKeyPath
I0917 01:59:59.858635    3043 main.go:141] libmachine: (functional-965000) Calling .GetSSHUsername
I0917 01:59:59.858719    3043 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/functional-965000/id_rsa Username:docker}
I0917 01:59:59.889253    3043 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0917 01:59:59.906938    3043 main.go:141] libmachine: Making call to close driver server
I0917 01:59:59.906947    3043 main.go:141] libmachine: (functional-965000) Calling .Close
I0917 01:59:59.907096    3043 main.go:141] libmachine: Successfully made call to close driver server
I0917 01:59:59.907116    3043 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 01:59:59.907128    3043 main.go:141] libmachine: Making call to close driver server
I0917 01:59:59.907134    3043 main.go:141] libmachine: (functional-965000) Calling .Close
I0917 01:59:59.907135    3043 main.go:141] libmachine: (functional-965000) DBG | Closing plugin on server side
I0917 01:59:59.907269    3043 main.go:141] libmachine: Successfully made call to close driver server
I0917 01:59:59.907279    3043 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 01:59:59.907304    3043 main.go:141] libmachine: (functional-965000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-965000 image ls --format yaml --alsologtostderr:
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 3062d164c7e60433f19689b905f6eba6c260d4311ba20e54e5272dd06bd7ebbe
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-965000
size: "30"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-965000
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-965000 image ls --format yaml --alsologtostderr:
I0917 01:59:59.684637    3039 out.go:345] Setting OutFile to fd 1 ...
I0917 01:59:59.685225    3039 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:59:59.685232    3039 out.go:358] Setting ErrFile to fd 2...
I0917 01:59:59.685236    3039 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:59:59.685781    3039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
I0917 01:59:59.686415    3039 config.go:182] Loaded profile config "functional-965000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:59:59.686508    3039 config.go:182] Loaded profile config "functional-965000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:59:59.686839    3039 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 01:59:59.686886    3039 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 01:59:59.695341    3039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51004
I0917 01:59:59.695771    3039 main.go:141] libmachine: () Calling .GetVersion
I0917 01:59:59.696164    3039 main.go:141] libmachine: Using API Version  1
I0917 01:59:59.696173    3039 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 01:59:59.696417    3039 main.go:141] libmachine: () Calling .GetMachineName
I0917 01:59:59.696592    3039 main.go:141] libmachine: (functional-965000) Calling .GetState
I0917 01:59:59.696680    3039 main.go:141] libmachine: (functional-965000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0917 01:59:59.696745    3039 main.go:141] libmachine: (functional-965000) DBG | hyperkit pid from json: 2307
I0917 01:59:59.698022    3039 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 01:59:59.698045    3039 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 01:59:59.706332    3039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51006
I0917 01:59:59.706675    3039 main.go:141] libmachine: () Calling .GetVersion
I0917 01:59:59.706996    3039 main.go:141] libmachine: Using API Version  1
I0917 01:59:59.707004    3039 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 01:59:59.707215    3039 main.go:141] libmachine: () Calling .GetMachineName
I0917 01:59:59.707333    3039 main.go:141] libmachine: (functional-965000) Calling .DriverName
I0917 01:59:59.707491    3039 ssh_runner.go:195] Run: systemctl --version
I0917 01:59:59.707512    3039 main.go:141] libmachine: (functional-965000) Calling .GetSSHHostname
I0917 01:59:59.707596    3039 main.go:141] libmachine: (functional-965000) Calling .GetSSHPort
I0917 01:59:59.707677    3039 main.go:141] libmachine: (functional-965000) Calling .GetSSHKeyPath
I0917 01:59:59.707748    3039 main.go:141] libmachine: (functional-965000) Calling .GetSSHUsername
I0917 01:59:59.707860    3039 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/functional-965000/id_rsa Username:docker}
I0917 01:59:59.738448    3039 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0917 01:59:59.755332    3039 main.go:141] libmachine: Making call to close driver server
I0917 01:59:59.755340    3039 main.go:141] libmachine: (functional-965000) Calling .Close
I0917 01:59:59.755498    3039 main.go:141] libmachine: Successfully made call to close driver server
I0917 01:59:59.755506    3039 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 01:59:59.755515    3039 main.go:141] libmachine: Making call to close driver server
I0917 01:59:59.755519    3039 main.go:141] libmachine: (functional-965000) Calling .Close
I0917 01:59:59.755521    3039 main.go:141] libmachine: (functional-965000) DBG | Closing plugin on server side
I0917 01:59:59.755678    3039 main.go:141] libmachine: Successfully made call to close driver server
I0917 01:59:59.755689    3039 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 01:59:59.755700    3039 main.go:141] libmachine: (functional-965000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)
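
The four ImageList subtests exercise the same listing with different encodings. A hedged summary of the invocations, assuming a minikube binary on PATH:

    # Same image inventory, four output formats.
    minikube -p functional-965000 image ls --format short
    minikube -p functional-965000 image ls --format table
    minikube -p functional-965000 image ls --format json | python3 -m json.tool
    minikube -p functional-965000 image ls --format yaml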

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-965000 ssh pgrep buildkitd: exit status 1 (130.894753ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image build -t localhost/my-image:functional-965000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-965000 image build -t localhost/my-image:functional-965000 testdata/build --alsologtostderr: (2.232599589s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-965000 image build -t localhost/my-image:functional-965000 testdata/build --alsologtostderr:
I0917 02:00:00.287567    3058 out.go:345] Setting OutFile to fd 1 ...
I0917 02:00:00.287885    3058 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 02:00:00.287892    3058 out.go:358] Setting ErrFile to fd 2...
I0917 02:00:00.287896    3058 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 02:00:00.288116    3058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
I0917 02:00:00.288883    3058 config.go:182] Loaded profile config "functional-965000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 02:00:00.290006    3058 config.go:182] Loaded profile config "functional-965000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 02:00:00.290443    3058 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 02:00:00.290496    3058 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 02:00:00.300941    3058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51029
I0917 02:00:00.301481    3058 main.go:141] libmachine: () Calling .GetVersion
I0917 02:00:00.302023    3058 main.go:141] libmachine: Using API Version  1
I0917 02:00:00.302057    3058 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 02:00:00.302349    3058 main.go:141] libmachine: () Calling .GetMachineName
I0917 02:00:00.302507    3058 main.go:141] libmachine: (functional-965000) Calling .GetState
I0917 02:00:00.302605    3058 main.go:141] libmachine: (functional-965000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0917 02:00:00.302720    3058 main.go:141] libmachine: (functional-965000) DBG | hyperkit pid from json: 2307
I0917 02:00:00.304373    3058 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0917 02:00:00.304405    3058 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0917 02:00:00.314651    3058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51031
I0917 02:00:00.315099    3058 main.go:141] libmachine: () Calling .GetVersion
I0917 02:00:00.315512    3058 main.go:141] libmachine: Using API Version  1
I0917 02:00:00.315528    3058 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 02:00:00.315825    3058 main.go:141] libmachine: () Calling .GetMachineName
I0917 02:00:00.316013    3058 main.go:141] libmachine: (functional-965000) Calling .DriverName
I0917 02:00:00.316199    3058 ssh_runner.go:195] Run: systemctl --version
I0917 02:00:00.316221    3058 main.go:141] libmachine: (functional-965000) Calling .GetSSHHostname
I0917 02:00:00.316343    3058 main.go:141] libmachine: (functional-965000) Calling .GetSSHPort
I0917 02:00:00.316440    3058 main.go:141] libmachine: (functional-965000) Calling .GetSSHKeyPath
I0917 02:00:00.316546    3058 main.go:141] libmachine: (functional-965000) Calling .GetSSHUsername
I0917 02:00:00.316652    3058 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/functional-965000/id_rsa Username:docker}
I0917 02:00:00.356584    3058 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.818121934.tar
I0917 02:00:00.356714    3058 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 02:00:00.367042    3058 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.818121934.tar
I0917 02:00:00.374694    3058 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.818121934.tar: stat -c "%s %y" /var/lib/minikube/build/build.818121934.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.818121934.tar': No such file or directory
I0917 02:00:00.374734    3058 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.818121934.tar --> /var/lib/minikube/build/build.818121934.tar (3072 bytes)
I0917 02:00:00.406747    3058 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.818121934
I0917 02:00:00.418443    3058 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.818121934 -xf /var/lib/minikube/build/build.818121934.tar
I0917 02:00:00.430453    3058 docker.go:360] Building image: /var/lib/minikube/build/build.818121934
I0917 02:00:00.430559    3058 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-965000 /var/lib/minikube/build/build.818121934
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:743fd5b28eb7c6093fe3f55c8a737d594ff0120f32a564c4a11a1990985383f9 done
#8 naming to localhost/my-image:functional-965000 done
#8 DONE 0.0s
I0917 02:00:02.407532    3058 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-965000 /var/lib/minikube/build/build.818121934: (1.976940885s)
I0917 02:00:02.407602    3058 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.818121934
I0917 02:00:02.415856    3058 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.818121934.tar
I0917 02:00:02.426400    3058 build_images.go:217] Built localhost/my-image:functional-965000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.818121934.tar
I0917 02:00:02.426424    3058 build_images.go:133] succeeded building to: functional-965000
I0917 02:00:02.426440    3058 build_images.go:134] failed building to: 
I0917 02:00:02.426464    3058 main.go:141] libmachine: Making call to close driver server
I0917 02:00:02.426470    3058 main.go:141] libmachine: (functional-965000) Calling .Close
I0917 02:00:02.426637    3058 main.go:141] libmachine: Successfully made call to close driver server
I0917 02:00:02.426643    3058 main.go:141] libmachine: (functional-965000) DBG | Closing plugin on server side
I0917 02:00:02.426659    3058 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 02:00:02.426667    3058 main.go:141] libmachine: Making call to close driver server
I0917 02:00:02.426672    3058 main.go:141] libmachine: (functional-965000) Calling .Close
I0917 02:00:02.426806    3058 main.go:141] libmachine: (functional-965000) DBG | Closing plugin on server side
I0917 02:00:02.426815    3058 main.go:141] libmachine: Successfully made call to close driver server
I0917 02:00:02.426826    3058 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image ls
2024/09/17 02:00:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.52s)
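
Since pgrep found no standalone buildkitd, the build context was packed into a tar, copied into the VM, and built there with docker build. A hedged sketch of reproducing the build the test performs (the Dockerfile contents are inferred from the build steps above, and content.txt is a stand-in):

    # Build an image directly inside the cluster's container runtime.
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    echo test > content.txt
    minikube -p functional-965000 image build -t localhost/my-image:functional-965000 .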

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.835067344s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-965000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.86s)

TestFunctional/parallel/DockerEnv/bash (0.61s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-965000 docker-env) && out/minikube-darwin-amd64 status -p functional-965000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-965000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.61s)
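
docker-env prints shell exports (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH) that point the host's docker client at the daemon inside the VM; the eval in the test applies them to the current shell. Hedged usage, assuming minikube on PATH:

    # Drive the VM's docker daemon from the host shell.
    eval $(minikube -p functional-965000 docker-env)
    docker images                                              # now lists images inside the VM
    eval $(minikube -p functional-965000 docker-env --unset)   # revert to the host daemon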

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)
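
update-context rewrites the cluster's kubeconfig entry to the VM's current IP, which matters after the machine's address changes; the three subtests cover an unchanged context, a missing cluster, and no clusters at all. A hedged sketch:

    # Re-point kubeconfig at the cluster's current address, then inspect it.
    minikube -p functional-965000 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-965000")].cluster.server}'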

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image load --daemon kicbase/echo-server:functional-965000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.02s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image load --daemon kicbase/echo-server:functional-965000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.62s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-965000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image load --daemon kicbase/echo-server:functional-965000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.40s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image save kicbase/echo-server:functional-965000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.25s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image rm kicbase/echo-server:functional-965000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.32s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.48s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-965000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 image save --daemon kicbase/echo-server:functional-965000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-965000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)
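
Taken together, the last few ImageCommands subtests round-trip an image out of and back into the cluster runtime. A hedged recap of that flow using the same commands the tests invoke:

    # Save, remove, and re-load an image in the cluster's runtime.
    minikube -p functional-965000 image save kicbase/echo-server:functional-965000 ./echo-server-save.tar
    minikube -p functional-965000 image rm kicbase/echo-server:functional-965000
    minikube -p functional-965000 image load ./echo-server-save.tar
    minikube -p functional-965000 image ls | grep echo-server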

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (23.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-965000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-965000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-d25h4" [b7ed8175-4d15-4d55-80d8-dfb41e232997] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-d25h4" [b7ed8175-4d15-4d55-80d8-dfb41e232997] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.004051244s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.16s)
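
The DeployApp pattern is: create a deployment, expose it as a NodePort service, then wait for the pod to become ready. A hedged sketch using the same commands, with kubectl wait standing in for the test's polling helper:

    # Deploy, expose, wait, then resolve the service URL.
    kubectl --context functional-965000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-965000 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-965000 wait --for=condition=available deployment/hello-node --timeout=600s
    minikube -p functional-965000 service hello-node --url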

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.36s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-965000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-965000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-965000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2737: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-965000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.36s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-965000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.13s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-965000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [db78d1fd-4c68-4322-9522-4d171c9e0e44] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [db78d1fd-4c68-4322-9522-4d171c9e0e44] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.002017349s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.13s)

TestFunctional/parallel/ServiceCmd/List (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 service list -o json
functional_test.go:1494: Took "375.568508ms" to run "out/minikube-darwin-amd64 -p functional-965000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:31000
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

TestFunctional/parallel/ServiceCmd/Format (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.25s)

TestFunctional/parallel/ServiceCmd/URL (0.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:31000
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)
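
The HTTPS, Format, and URL subtests all resolve the same NodePort endpoint on the VM's IP (192.169.0.4:31000 in this run); done by hand, that is:

    # print the full URL for the NodePort service
    out/minikube-darwin-amd64 -p functional-965000 service hello-node --url
    # or render just the node IP, via the same Go template the Format subtest uses
    out/minikube-darwin-amd64 -p functional-965000 service hello-node --url --format={{.IP}}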

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-965000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.168.88 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
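
The three resolution checks above can be replayed manually while a tunnel is up; a sketch assuming the same service name and the cluster DNS at 10.96.0.10 shown in the log (curl is a stand-in for the test's Go HTTP client):

    # ask the cluster DNS server directly, as DNSResolutionByDig does
    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
    # ask the macOS resolver, as DNSResolutionByDscacheutil does
    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
    # fetch the service through the tunnel by DNS name, as AccessThroughDNS does
    curl http://nginx-svc.default.svc.cluster.local./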

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-965000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)
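
The TunnelCmd serial group is one lifecycle: start minikube tunnel in the background, wait for the LoadBalancer service to gain an ingress IP, then kill the tunnel. Condensed into a sketch (backgrounding with & is an assumption; the test manages the process from Go):

    # start the tunnel in the background; on macOS it may prompt for sudo to add routes
    out/minikube-darwin-amd64 -p functional-965000 tunnel --alsologtostderr &
    TUNNEL_PID=$!
    # wait for the ingress IP to appear, as WaitService/IngressIP checks
    kubectl --context functional-965000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    # tear the tunnel down, as DeleteTunnel does
    kill "$TUNNEL_PID"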

TestFunctional/parallel/ProfileCmd/profile_not_create (0.25s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.25s)

TestFunctional/parallel/ProfileCmd/profile_list (0.25s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "175.18266ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "78.96535ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.25s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "181.344406ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "78.227593ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
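
The profile listing these checks exercise is machine-readable; a sketch of pulling fields out of it (jq is an assumption, not part of the test suite):

    # list profiles as JSON and extract each valid profile's name
    out/minikube-darwin-amd64 profile list -o json | jq -r '.valid[] | .Name'
    # the --light variant skips cluster status validation, consistent with the faster timing above
    out/minikube-darwin-amd64 profile list -o json --light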

TestFunctional/parallel/MountCmd/any-port (8.02s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-965000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1031324420/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726563586215102000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1031324420/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726563586215102000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1031324420/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726563586215102000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1031324420/001/test-1726563586215102000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (156.765543ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 08:59 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 08:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 08:59 test-1726563586215102000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh cat /mount-9p/test-1726563586215102000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-965000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4cdf3931-82d7-4431-8dcb-c87036dc4efb] Pending
helpers_test.go:344: "busybox-mount" [4cdf3931-82d7-4431-8dcb-c87036dc4efb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4cdf3931-82d7-4431-8dcb-c87036dc4efb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4cdf3931-82d7-4431-8dcb-c87036dc4efb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004327276s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-965000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-965000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port1031324420/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.02s)

TestFunctional/parallel/MountCmd/specific-port (1.57s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-965000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port4219480012/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (154.95761ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-965000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port4219480012/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-965000 ssh "sudo umount -f /mount-9p": exit status 1 (190.249787ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --

** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-965000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-965000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port4219480012/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.57s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.34s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-965000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1906786629/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-965000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1906786629/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-965000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1906786629/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T" /mount1: exit status 1 (213.414676ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T" /mount1: exit status 1 (215.98548ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-965000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-965000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1906786629/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-965000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1906786629/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-965000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1906786629/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.34s)
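
All three MountCmd subtests follow the same 9p pattern: share a host directory into the guest, verify it with findmnt over ssh, then unmount. A sketch with /tmp/demo-mount as a hypothetical host directory (the tests use temp dirs under /var/folders):

    # share a host directory into the guest over 9p (the command runs in the foreground)
    out/minikube-darwin-amd64 mount -p functional-965000 /tmp/demo-mount:/mount-9p &
    # confirm the 9p filesystem is visible inside the VM
    out/minikube-darwin-amd64 -p functional-965000 ssh "findmnt -T /mount-9p | grep 9p"
    # kill all mount processes for the profile, as VerifyCleanup does
    out/minikube-darwin-amd64 mount -p functional-965000 --kill=true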

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-965000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-965000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-965000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (194.97s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-857000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0917 02:01:35.742343    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:02:03.454001    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-857000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m14.594689475s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (194.97s)

TestMultiControlPlane/serial/DeployApp (6.71s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-857000 -- rollout status deployment/busybox: (4.375026388s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-4jzg8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-5x9l8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-mhjf6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-4jzg8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-5x9l8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-mhjf6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-4jzg8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-5x9l8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-mhjf6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.71s)

TestMultiControlPlane/serial/PingHostFromPods (1.28s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-4jzg8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-4jzg8 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-5x9l8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-5x9l8 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-mhjf6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-857000 -- exec busybox-7dff88458-mhjf6 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.28s)
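
The ping test first scrapes the host's IP out of nslookup output (fifth line, third space-separated field in busybox's output format) and then pings it; broken apart, that pipeline is:

    # inside a busybox pod: resolve the hypervisor host alias, then ping the returned address
    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOST_IP"   # 192.169.0.1 in this run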

TestMultiControlPlane/serial/AddWorkerNode (53.02s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-857000 -v=7 --alsologtostderr
E0917 02:03:59.146502    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:03:59.152671    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:03:59.164347    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:03:59.185774    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:03:59.227731    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:03:59.310425    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:03:59.471851    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:03:59.794483    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:04:00.436664    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:04:01.719387    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:04:04.281010    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:04:09.403332    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:04:19.645803    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-857000 -v=7 --alsologtostderr: (52.563404951s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.02s)

TestMultiControlPlane/serial/NodeLabels (0.05s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-857000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)

TestMultiControlPlane/serial/CopyFile (9.09s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp testdata/cp-test.txt ha-857000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3490570692/001/cp-test_ha-857000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000:/home/docker/cp-test.txt ha-857000-m02:/home/docker/cp-test_ha-857000_ha-857000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m02 "sudo cat /home/docker/cp-test_ha-857000_ha-857000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000:/home/docker/cp-test.txt ha-857000-m03:/home/docker/cp-test_ha-857000_ha-857000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m03 "sudo cat /home/docker/cp-test_ha-857000_ha-857000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000:/home/docker/cp-test.txt ha-857000-m04:/home/docker/cp-test_ha-857000_ha-857000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m04 "sudo cat /home/docker/cp-test_ha-857000_ha-857000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp testdata/cp-test.txt ha-857000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3490570692/001/cp-test_ha-857000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000-m02:/home/docker/cp-test.txt ha-857000:/home/docker/cp-test_ha-857000-m02_ha-857000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000 "sudo cat /home/docker/cp-test_ha-857000-m02_ha-857000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000-m02:/home/docker/cp-test.txt ha-857000-m03:/home/docker/cp-test_ha-857000-m02_ha-857000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m03 "sudo cat /home/docker/cp-test_ha-857000-m02_ha-857000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000-m02:/home/docker/cp-test.txt ha-857000-m04:/home/docker/cp-test_ha-857000-m02_ha-857000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m04 "sudo cat /home/docker/cp-test_ha-857000-m02_ha-857000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp testdata/cp-test.txt ha-857000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3490570692/001/cp-test_ha-857000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m03 "sudo cat /home/docker/cp-test.txt"
E0917 02:04:40.128253    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000-m03:/home/docker/cp-test.txt ha-857000:/home/docker/cp-test_ha-857000-m03_ha-857000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000 "sudo cat /home/docker/cp-test_ha-857000-m03_ha-857000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000-m03:/home/docker/cp-test.txt ha-857000-m02:/home/docker/cp-test_ha-857000-m03_ha-857000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m02 "sudo cat /home/docker/cp-test_ha-857000-m03_ha-857000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000-m03:/home/docker/cp-test.txt ha-857000-m04:/home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m04 "sudo cat /home/docker/cp-test_ha-857000-m03_ha-857000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp testdata/cp-test.txt ha-857000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3490570692/001/cp-test_ha-857000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt ha-857000:/home/docker/cp-test_ha-857000-m04_ha-857000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000 "sudo cat /home/docker/cp-test_ha-857000-m04_ha-857000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt ha-857000-m02:/home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m02 "sudo cat /home/docker/cp-test_ha-857000-m04_ha-857000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 cp ha-857000-m04:/home/docker/cp-test.txt ha-857000-m03:/home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m03 "sudo cat /home/docker/cp-test_ha-857000-m04_ha-857000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.09s)
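
Every CopyFile step pairs a minikube cp with an ssh cat to verify the transfer, covering host-to-node, node-to-host, and node-to-node for all four machines. The basic unit, using node names from this cluster:

    # copy a file from the host into a node, then read it back to verify the contents
    out/minikube-darwin-amd64 -p ha-857000 cp testdata/cp-test.txt ha-857000-m02:/home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p ha-857000 ssh -n ha-857000-m02 "sudo cat /home/docker/cp-test.txt"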

TestMultiControlPlane/serial/StopSecondaryNode (8.76s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-857000 node stop m02 -v=7 --alsologtostderr: (8.404267295s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr: exit status 7 (350.086592ms)

-- stdout --
	ha-857000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-857000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-857000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-857000-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0917 02:04:52.366230    3881 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:04:52.366524    3881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:04:52.366530    3881 out.go:358] Setting ErrFile to fd 2...
	I0917 02:04:52.366534    3881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:04:52.366714    3881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:04:52.366898    3881 out.go:352] Setting JSON to false
	I0917 02:04:52.366921    3881 mustload.go:65] Loading cluster: ha-857000
	I0917 02:04:52.366962    3881 notify.go:220] Checking for updates...
	I0917 02:04:52.367279    3881 config.go:182] Loaded profile config "ha-857000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:04:52.367292    3881 status.go:255] checking status of ha-857000 ...
	I0917 02:04:52.367732    3881 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:04:52.367777    3881 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:04:52.376571    3881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51777
	I0917 02:04:52.377006    3881 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:04:52.377453    3881 main.go:141] libmachine: Using API Version  1
	I0917 02:04:52.377463    3881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:04:52.377671    3881 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:04:52.377786    3881 main.go:141] libmachine: (ha-857000) Calling .GetState
	I0917 02:04:52.377873    3881 main.go:141] libmachine: (ha-857000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:04:52.377949    3881 main.go:141] libmachine: (ha-857000) DBG | hyperkit pid from json: 3402
	I0917 02:04:52.378964    3881 status.go:330] ha-857000 host status = "Running" (err=<nil>)
	I0917 02:04:52.378981    3881 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:04:52.379239    3881 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:04:52.379259    3881 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:04:52.387522    3881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51779
	I0917 02:04:52.387871    3881 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:04:52.388181    3881 main.go:141] libmachine: Using API Version  1
	I0917 02:04:52.388197    3881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:04:52.388429    3881 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:04:52.388565    3881 main.go:141] libmachine: (ha-857000) Calling .GetIP
	I0917 02:04:52.388647    3881 host.go:66] Checking if "ha-857000" exists ...
	I0917 02:04:52.388919    3881 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:04:52.388946    3881 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:04:52.397426    3881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51781
	I0917 02:04:52.397785    3881 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:04:52.398111    3881 main.go:141] libmachine: Using API Version  1
	I0917 02:04:52.398125    3881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:04:52.398347    3881 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:04:52.398451    3881 main.go:141] libmachine: (ha-857000) Calling .DriverName
	I0917 02:04:52.398596    3881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:04:52.398615    3881 main.go:141] libmachine: (ha-857000) Calling .GetSSHHostname
	I0917 02:04:52.398686    3881 main.go:141] libmachine: (ha-857000) Calling .GetSSHPort
	I0917 02:04:52.398760    3881 main.go:141] libmachine: (ha-857000) Calling .GetSSHKeyPath
	I0917 02:04:52.398848    3881 main.go:141] libmachine: (ha-857000) Calling .GetSSHUsername
	I0917 02:04:52.398931    3881 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000/id_rsa Username:docker}
	I0917 02:04:52.429579    3881 ssh_runner.go:195] Run: systemctl --version
	I0917 02:04:52.434357    3881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:04:52.445489    3881 kubeconfig.go:125] found "ha-857000" server: "https://192.169.0.254:8443"
	I0917 02:04:52.445511    3881 api_server.go:166] Checking apiserver status ...
	I0917 02:04:52.445553    3881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:04:52.456467    3881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2041/cgroup
	W0917 02:04:52.464179    3881 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2041/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:04:52.464245    3881 ssh_runner.go:195] Run: ls
	I0917 02:04:52.467728    3881 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0917 02:04:52.471100    3881 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0917 02:04:52.471118    3881 status.go:422] ha-857000 apiserver status = Running (err=<nil>)
	I0917 02:04:52.471127    3881 status.go:257] ha-857000 status: &{Name:ha-857000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:04:52.471138    3881 status.go:255] checking status of ha-857000-m02 ...
	I0917 02:04:52.471414    3881 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:04:52.471433    3881 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:04:52.480066    3881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51785
	I0917 02:04:52.480450    3881 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:04:52.480754    3881 main.go:141] libmachine: Using API Version  1
	I0917 02:04:52.480764    3881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:04:52.480941    3881 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:04:52.481054    3881 main.go:141] libmachine: (ha-857000-m02) Calling .GetState
	I0917 02:04:52.481135    3881 main.go:141] libmachine: (ha-857000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:04:52.481213    3881 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid from json: 3419
	I0917 02:04:52.482260    3881 main.go:141] libmachine: (ha-857000-m02) DBG | hyperkit pid 3419 missing from process table
	I0917 02:04:52.482304    3881 status.go:330] ha-857000-m02 host status = "Stopped" (err=<nil>)
	I0917 02:04:52.482316    3881 status.go:343] host is not running, skipping remaining checks
	I0917 02:04:52.482322    3881 status.go:257] ha-857000-m02 status: &{Name:ha-857000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:04:52.482338    3881 status.go:255] checking status of ha-857000-m03 ...
	I0917 02:04:52.482621    3881 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:04:52.482648    3881 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:04:52.491355    3881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51787
	I0917 02:04:52.491700    3881 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:04:52.492020    3881 main.go:141] libmachine: Using API Version  1
	I0917 02:04:52.492030    3881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:04:52.492254    3881 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:04:52.492369    3881 main.go:141] libmachine: (ha-857000-m03) Calling .GetState
	I0917 02:04:52.492442    3881 main.go:141] libmachine: (ha-857000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:04:52.492526    3881 main.go:141] libmachine: (ha-857000-m03) DBG | hyperkit pid from json: 3442
	I0917 02:04:52.493507    3881 status.go:330] ha-857000-m03 host status = "Running" (err=<nil>)
	I0917 02:04:52.493516    3881 host.go:66] Checking if "ha-857000-m03" exists ...
	I0917 02:04:52.493778    3881 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:04:52.493802    3881 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:04:52.502454    3881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51789
	I0917 02:04:52.502802    3881 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:04:52.503115    3881 main.go:141] libmachine: Using API Version  1
	I0917 02:04:52.503129    3881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:04:52.503323    3881 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:04:52.503428    3881 main.go:141] libmachine: (ha-857000-m03) Calling .GetIP
	I0917 02:04:52.503513    3881 host.go:66] Checking if "ha-857000-m03" exists ...
	I0917 02:04:52.503796    3881 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:04:52.503821    3881 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:04:52.512182    3881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51791
	I0917 02:04:52.512524    3881 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:04:52.512862    3881 main.go:141] libmachine: Using API Version  1
	I0917 02:04:52.512878    3881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:04:52.513095    3881 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:04:52.513210    3881 main.go:141] libmachine: (ha-857000-m03) Calling .DriverName
	I0917 02:04:52.513341    3881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:04:52.513353    3881 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHHostname
	I0917 02:04:52.513426    3881 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHPort
	I0917 02:04:52.513503    3881 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHKeyPath
	I0917 02:04:52.513588    3881 main.go:141] libmachine: (ha-857000-m03) Calling .GetSSHUsername
	I0917 02:04:52.513661    3881 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m03/id_rsa Username:docker}
	I0917 02:04:52.545501    3881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:04:52.557045    3881 kubeconfig.go:125] found "ha-857000" server: "https://192.169.0.254:8443"
	I0917 02:04:52.557059    3881 api_server.go:166] Checking apiserver status ...
	I0917 02:04:52.557108    3881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:04:52.568524    3881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1880/cgroup
	W0917 02:04:52.575842    3881 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1880/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:04:52.575906    3881 ssh_runner.go:195] Run: ls
	I0917 02:04:52.579210    3881 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0917 02:04:52.582307    3881 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0917 02:04:52.582318    3881 status.go:422] ha-857000-m03 apiserver status = Running (err=<nil>)
	I0917 02:04:52.582324    3881 status.go:257] ha-857000-m03 status: &{Name:ha-857000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:04:52.582334    3881 status.go:255] checking status of ha-857000-m04 ...
	I0917 02:04:52.582619    3881 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:04:52.582641    3881 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:04:52.591250    3881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51795
	I0917 02:04:52.591604    3881 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:04:52.591979    3881 main.go:141] libmachine: Using API Version  1
	I0917 02:04:52.591996    3881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:04:52.592219    3881 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:04:52.592320    3881 main.go:141] libmachine: (ha-857000-m04) Calling .GetState
	I0917 02:04:52.592398    3881 main.go:141] libmachine: (ha-857000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:04:52.592497    3881 main.go:141] libmachine: (ha-857000-m04) DBG | hyperkit pid from json: 3550
	I0917 02:04:52.593596    3881 status.go:330] ha-857000-m04 host status = "Running" (err=<nil>)
	I0917 02:04:52.593606    3881 host.go:66] Checking if "ha-857000-m04" exists ...
	I0917 02:04:52.593887    3881 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:04:52.593908    3881 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:04:52.602277    3881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51797
	I0917 02:04:52.602627    3881 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:04:52.602939    3881 main.go:141] libmachine: Using API Version  1
	I0917 02:04:52.602948    3881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:04:52.603172    3881 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:04:52.603293    3881 main.go:141] libmachine: (ha-857000-m04) Calling .GetIP
	I0917 02:04:52.603386    3881 host.go:66] Checking if "ha-857000-m04" exists ...
	I0917 02:04:52.603644    3881 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:04:52.603670    3881 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:04:52.612113    3881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51799
	I0917 02:04:52.612446    3881 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:04:52.612757    3881 main.go:141] libmachine: Using API Version  1
	I0917 02:04:52.612765    3881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:04:52.612959    3881 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:04:52.613067    3881 main.go:141] libmachine: (ha-857000-m04) Calling .DriverName
	I0917 02:04:52.613188    3881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:04:52.613199    3881 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHHostname
	I0917 02:04:52.613270    3881 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHPort
	I0917 02:04:52.613343    3881 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHKeyPath
	I0917 02:04:52.613426    3881 main.go:141] libmachine: (ha-857000-m04) Calling .GetSSHUsername
	I0917 02:04:52.613492    3881 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa Username:docker}
	I0917 02:04:52.647625    3881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:04:52.659086    3881 status.go:257] ha-857000-m04 status: &{Name:ha-857000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.76s)
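
For a worker node such as ha-857000-m04, the status check traced in the stderr above bottoms out in two SSH probes: reading /var disk usage with df/awk, and testing the kubelet unit with "systemctl is-active --quiet"; APIServer and Kubeconfig are reported "Irrelevant" because a worker runs neither. A minimal Go sketch of those two probes (illustrative only, not minikube's actual implementation; assumes ssh access with the key path and address shown in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // probe runs one command on the node over ssh and returns trimmed output.
    func probe(cmd string) (string, error) {
    	out, err := exec.Command("ssh",
    		"-i", "/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/ha-857000-m04/id_rsa",
    		"docker@192.169.0.8", cmd).CombinedOutput()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	// Disk usage of /var: second line of df output, fifth column.
    	if use, err := probe(`df -h /var | awk 'NR==2{print $5}'`); err == nil {
    		fmt.Println("/var used:", use)
    	}
    	// is-active --quiet prints nothing and exits 0 iff the unit is active.
    	if _, err := probe("sudo systemctl is-active --quiet service kubelet"); err == nil {
    		fmt.Println("kubelet: Running")
    	} else {
    		fmt.Println("kubelet: Stopped")
    	}
    }
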

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)

TestMultiControlPlane/serial/RestartSecondaryNode (43.19s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 node start m02 -v=7 --alsologtostderr
E0917 02:05:21.091031    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-857000 node start m02 -v=7 --alsologtostderr: (42.685872598s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-857000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.19s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.35s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.35s)

TestImageBuild/serial/Setup (40.15s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-585000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-585000 --driver=hyperkit : (40.149248947s)
--- PASS: TestImageBuild/serial/Setup (40.15s)

TestImageBuild/serial/NormalBuild (1.76s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-585000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-585000: (1.758517197s)
--- PASS: TestImageBuild/serial/NormalBuild (1.76s)

TestImageBuild/serial/BuildWithBuildArg (0.82s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-585000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.82s)

TestImageBuild/serial/BuildWithDockerIgnore (0.66s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-585000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.66s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.54s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-585000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.54s)

TestJSONOutput/start/Command (82.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-513000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0917 02:16:35.827643    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-513000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m22.855210366s)
--- PASS: TestJSONOutput/start/Command (82.86s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-513000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-513000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.32s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-513000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-513000 --output=json --user=testUser: (8.31623905s)
--- PASS: TestJSONOutput/stop/Command (8.32s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.74s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-183000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-183000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (524.060454ms)

-- stdout --
	{"specversion":"1.0","id":"c14cfb1e-947b-4e1e-a8bc-a433b2c721e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-183000] minikube v1.34.0 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5770dcbc-9126-45b2-b10e-7a1dd11393d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"a20aba66-9e39-4419-882b-8bccacbda593","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig"}}
	{"specversion":"1.0","id":"241b12c6-3db4-4082-a413-d6c82220a04b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"5ef4d206-3619-4f7b-b9e2-c52a65d14331","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"042d6010-50f9-4beb-8c1f-539157b078d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube"}}
	{"specversion":"1.0","id":"bd7d7327-c774-4a5c-adf2-4f6b7be086ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0a781334-d0c9-408f-a2b4-ee5ed4772857","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-183000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-183000
--- PASS: TestErrorJSONOutput (0.74s)
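
Every line in the stdout above is a CloudEvents-style JSON object, and the failure itself arrives as an io.k8s.sigs.minikube.error event whose data carries name, exitcode, and message. A short Go sketch for consuming such output (illustrative; the struct is trimmed to the fields visible above):

    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // event mirrors the fields seen in the JSON lines above.
    type event struct {
    	Type string            `json:"type"`
    	Data map[string]string `json:"data"`
    }

    func main() {
    	// e.g.: minikube start -p foo --output=json | this-program
    	sc := bufio.NewScanner(os.Stdin)
    	for sc.Scan() {
    		var ev event
    		if json.Unmarshal(sc.Bytes(), &ev) != nil {
    			continue // ignore anything that is not a JSON event
    		}
    		if ev.Type == "io.k8s.sigs.minikube.error" {
    			fmt.Printf("error %s (exit %s): %s\n",
    				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
    		}
    	}
    }
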

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (86.98s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-585000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-585000 --driver=hyperkit : (37.210057645s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-597000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-597000 --driver=hyperkit : (40.348510502s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-585000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-597000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-597000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-597000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-597000: (3.374003984s)
helpers_test.go:175: Cleaning up "first-585000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-585000
E0917 02:18:59.139692    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-585000: (5.239068784s)
--- PASS: TestMinikubeProfile (86.98s)

TestMultiNode/serial/FreshStart2Nodes (105.99s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-232000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0917 02:21:35.806827    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-232000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m45.745394662s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.99s)

TestMultiNode/serial/DeployApp2Nodes (4.97s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-232000 -- rollout status deployment/busybox: (3.322684902s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- exec busybox-7dff88458-7npgw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- exec busybox-7dff88458-8tvvp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- exec busybox-7dff88458-7npgw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- exec busybox-7dff88458-8tvvp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- exec busybox-7dff88458-7npgw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- exec busybox-7dff88458-8tvvp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.97s)

TestMultiNode/serial/PingHostFrom2Pods (0.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- exec busybox-7dff88458-7npgw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- exec busybox-7dff88458-7npgw -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- exec busybox-7dff88458-8tvvp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-232000 -- exec busybox-7dff88458-8tvvp -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

TestMultiNode/serial/AddNode (48.34s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-232000 -v 3 --alsologtostderr
E0917 02:23:59.141160    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-232000 -v 3 --alsologtostderr: (48.0252769s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.34s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-232000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.17s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.17s)

TestMultiNode/serial/CopyFile (5.23s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 cp testdata/cp-test.txt multinode-232000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 cp multinode-232000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1710989141/001/cp-test_multinode-232000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 cp multinode-232000:/home/docker/cp-test.txt multinode-232000-m02:/home/docker/cp-test_multinode-232000_multinode-232000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000-m02 "sudo cat /home/docker/cp-test_multinode-232000_multinode-232000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 cp multinode-232000:/home/docker/cp-test.txt multinode-232000-m03:/home/docker/cp-test_multinode-232000_multinode-232000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000-m03 "sudo cat /home/docker/cp-test_multinode-232000_multinode-232000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 cp testdata/cp-test.txt multinode-232000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 cp multinode-232000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1710989141/001/cp-test_multinode-232000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 cp multinode-232000-m02:/home/docker/cp-test.txt multinode-232000:/home/docker/cp-test_multinode-232000-m02_multinode-232000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000 "sudo cat /home/docker/cp-test_multinode-232000-m02_multinode-232000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 cp multinode-232000-m02:/home/docker/cp-test.txt multinode-232000-m03:/home/docker/cp-test_multinode-232000-m02_multinode-232000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000-m03 "sudo cat /home/docker/cp-test_multinode-232000-m02_multinode-232000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 cp testdata/cp-test.txt multinode-232000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 cp multinode-232000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1710989141/001/cp-test_multinode-232000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 cp multinode-232000-m03:/home/docker/cp-test.txt multinode-232000:/home/docker/cp-test_multinode-232000-m03_multinode-232000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000 "sudo cat /home/docker/cp-test_multinode-232000-m03_multinode-232000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 cp multinode-232000-m03:/home/docker/cp-test.txt multinode-232000-m02:/home/docker/cp-test_multinode-232000-m03_multinode-232000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 ssh -n multinode-232000-m02 "sudo cat /home/docker/cp-test_multinode-232000-m03_multinode-232000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.23s)
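
CopyFile above walks host-to-node, node-to-host, and every ordered node-to-node pair with "minikube cp", verifying each copy by cat-ing the file over ssh. The node-to-node part of that matrix is just a double loop over the node names; a small Go sketch that prints the same commands (illustration only):

    package main

    import "fmt"

    func main() {
    	nodes := []string{"multinode-232000", "multinode-232000-m02", "multinode-232000-m03"}
    	for _, src := range nodes {
    		for _, dst := range nodes {
    			if src == dst {
    				continue // a node never copies to itself
    			}
    			fmt.Printf("minikube -p multinode-232000 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
    				src, dst, src, dst)
    		}
    	}
    }
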

TestMultiNode/serial/StopNode (2.83s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-232000 node stop m03: (2.334688906s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-232000 status: exit status 7 (249.576589ms)

-- stdout --
	multinode-232000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-232000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-232000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-232000 status --alsologtostderr: exit status 7 (243.586104ms)

-- stdout --
	multinode-232000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-232000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-232000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 02:24:10.243210    5142 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:24:10.243459    5142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:24:10.243465    5142 out.go:358] Setting ErrFile to fd 2...
	I0917 02:24:10.243469    5142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:24:10.243659    5142 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:24:10.243856    5142 out.go:352] Setting JSON to false
	I0917 02:24:10.243878    5142 mustload.go:65] Loading cluster: multinode-232000
	I0917 02:24:10.243916    5142 notify.go:220] Checking for updates...
	I0917 02:24:10.244253    5142 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:24:10.244267    5142 status.go:255] checking status of multinode-232000 ...
	I0917 02:24:10.244694    5142 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:24:10.244744    5142 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:24:10.253499    5142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53331
	I0917 02:24:10.253857    5142 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:24:10.254241    5142 main.go:141] libmachine: Using API Version  1
	I0917 02:24:10.254252    5142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:24:10.254491    5142 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:24:10.254613    5142 main.go:141] libmachine: (multinode-232000) Calling .GetState
	I0917 02:24:10.254691    5142 main.go:141] libmachine: (multinode-232000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:24:10.254755    5142 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid from json: 4780
	I0917 02:24:10.255925    5142 status.go:330] multinode-232000 host status = "Running" (err=<nil>)
	I0917 02:24:10.255943    5142 host.go:66] Checking if "multinode-232000" exists ...
	I0917 02:24:10.256194    5142 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:24:10.256222    5142 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:24:10.264581    5142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53333
	I0917 02:24:10.264938    5142 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:24:10.265234    5142 main.go:141] libmachine: Using API Version  1
	I0917 02:24:10.265243    5142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:24:10.265502    5142 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:24:10.265642    5142 main.go:141] libmachine: (multinode-232000) Calling .GetIP
	I0917 02:24:10.265726    5142 host.go:66] Checking if "multinode-232000" exists ...
	I0917 02:24:10.265998    5142 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:24:10.266031    5142 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:24:10.274660    5142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53335
	I0917 02:24:10.275012    5142 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:24:10.275344    5142 main.go:141] libmachine: Using API Version  1
	I0917 02:24:10.275360    5142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:24:10.275580    5142 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:24:10.275693    5142 main.go:141] libmachine: (multinode-232000) Calling .DriverName
	I0917 02:24:10.275824    5142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:24:10.275842    5142 main.go:141] libmachine: (multinode-232000) Calling .GetSSHHostname
	I0917 02:24:10.275918    5142 main.go:141] libmachine: (multinode-232000) Calling .GetSSHPort
	I0917 02:24:10.275995    5142 main.go:141] libmachine: (multinode-232000) Calling .GetSSHKeyPath
	I0917 02:24:10.276089    5142 main.go:141] libmachine: (multinode-232000) Calling .GetSSHUsername
	I0917 02:24:10.276171    5142 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000/id_rsa Username:docker}
	I0917 02:24:10.306677    5142 ssh_runner.go:195] Run: systemctl --version
	I0917 02:24:10.310932    5142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:24:10.322326    5142 kubeconfig.go:125] found "multinode-232000" server: "https://192.169.0.14:8443"
	I0917 02:24:10.322349    5142 api_server.go:166] Checking apiserver status ...
	I0917 02:24:10.322393    5142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:24:10.333215    5142 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1955/cgroup
	W0917 02:24:10.340561    5142 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1955/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:24:10.340607    5142 ssh_runner.go:195] Run: ls
	I0917 02:24:10.343677    5142 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0917 02:24:10.346722    5142 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0917 02:24:10.346733    5142 status.go:422] multinode-232000 apiserver status = Running (err=<nil>)
	I0917 02:24:10.346741    5142 status.go:257] multinode-232000 status: &{Name:multinode-232000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:24:10.346753    5142 status.go:255] checking status of multinode-232000-m02 ...
	I0917 02:24:10.347043    5142 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:24:10.347062    5142 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:24:10.355580    5142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53339
	I0917 02:24:10.355934    5142 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:24:10.356249    5142 main.go:141] libmachine: Using API Version  1
	I0917 02:24:10.356259    5142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:24:10.356479    5142 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:24:10.356594    5142 main.go:141] libmachine: (multinode-232000-m02) Calling .GetState
	I0917 02:24:10.356671    5142 main.go:141] libmachine: (multinode-232000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:24:10.356748    5142 main.go:141] libmachine: (multinode-232000-m02) DBG | hyperkit pid from json: 4823
	I0917 02:24:10.357893    5142 status.go:330] multinode-232000-m02 host status = "Running" (err=<nil>)
	I0917 02:24:10.357903    5142 host.go:66] Checking if "multinode-232000-m02" exists ...
	I0917 02:24:10.358163    5142 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:24:10.358183    5142 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:24:10.366665    5142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53341
	I0917 02:24:10.367002    5142 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:24:10.367312    5142 main.go:141] libmachine: Using API Version  1
	I0917 02:24:10.367325    5142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:24:10.367529    5142 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:24:10.367641    5142 main.go:141] libmachine: (multinode-232000-m02) Calling .GetIP
	I0917 02:24:10.367748    5142 host.go:66] Checking if "multinode-232000-m02" exists ...
	I0917 02:24:10.367996    5142 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:24:10.368016    5142 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:24:10.376350    5142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53343
	I0917 02:24:10.376691    5142 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:24:10.377034    5142 main.go:141] libmachine: Using API Version  1
	I0917 02:24:10.377048    5142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:24:10.377254    5142 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:24:10.377376    5142 main.go:141] libmachine: (multinode-232000-m02) Calling .DriverName
	I0917 02:24:10.377508    5142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:24:10.377521    5142 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHHostname
	I0917 02:24:10.377611    5142 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHPort
	I0917 02:24:10.377704    5142 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHKeyPath
	I0917 02:24:10.377798    5142 main.go:141] libmachine: (multinode-232000-m02) Calling .GetSSHUsername
	I0917 02:24:10.377881    5142 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1025/.minikube/machines/multinode-232000-m02/id_rsa Username:docker}
	I0917 02:24:10.406901    5142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:24:10.418238    5142 status.go:257] multinode-232000-m02 status: &{Name:multinode-232000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:24:10.418278    5142 status.go:255] checking status of multinode-232000-m03 ...
	I0917 02:24:10.418608    5142 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:24:10.418637    5142 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:24:10.427365    5142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53346
	I0917 02:24:10.427722    5142 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:24:10.428071    5142 main.go:141] libmachine: Using API Version  1
	I0917 02:24:10.428093    5142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:24:10.428276    5142 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:24:10.428377    5142 main.go:141] libmachine: (multinode-232000-m03) Calling .GetState
	I0917 02:24:10.428450    5142 main.go:141] libmachine: (multinode-232000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:24:10.428528    5142 main.go:141] libmachine: (multinode-232000-m03) DBG | hyperkit pid from json: 4915
	I0917 02:24:10.429666    5142 main.go:141] libmachine: (multinode-232000-m03) DBG | hyperkit pid 4915 missing from process table
	I0917 02:24:10.429715    5142 status.go:330] multinode-232000-m03 host status = "Stopped" (err=<nil>)
	I0917 02:24:10.429725    5142 status.go:343] host is not running, skipping remaining checks
	I0917 02:24:10.429731    5142 status.go:257] multinode-232000-m03 status: &{Name:multinode-232000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.83s)

TestMultiNode/serial/StartAfterStop (41.59s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-232000 node start m03 -v=7 --alsologtostderr: (41.235473678s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.59s)

TestMultiNode/serial/DeleteNode (11.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-232000 node delete m03: (10.782451076s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (11.12s)
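
The go-template passed to kubectl above emits one True/False line per node by scanning .status.conditions for the entry whose type is "Ready". kubectl templates use Go's text/template syntax, so the exact template can be exercised locally against a stand-in node list (sketch; the data below is fabricated for illustration):

    package main

    import (
    	"os"
    	"text/template"
    )

    func main() {
    	// Minimal stand-in for `kubectl get nodes -o json`: one Ready node.
    	nodes := map[string]any{
    		"items": []map[string]any{
    			{"status": map[string]any{"conditions": []map[string]any{
    				{"type": "MemoryPressure", "status": "False"},
    				{"type": "Ready", "status": "True"},
    			}}},
    		},
    	}
    	tmpl := template.Must(template.New("ready").Parse(
    		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
    	_ = tmpl.Execute(os.Stdout, nodes) // prints: " True"
    }
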

TestMultiNode/serial/StopMultiNode (16.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-232000 stop: (16.641129652s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-232000 status: exit status 7 (80.735704ms)

-- stdout --
	multinode-232000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-232000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-232000 status --alsologtostderr: exit status 7 (79.090279ms)

-- stdout --
	multinode-232000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-232000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 02:28:43.822340    5399 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:28:43.822593    5399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:28:43.822598    5399 out.go:358] Setting ErrFile to fd 2...
	I0917 02:28:43.822602    5399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:28:43.822790    5399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1025/.minikube/bin
	I0917 02:28:43.822972    5399 out.go:352] Setting JSON to false
	I0917 02:28:43.822994    5399 mustload.go:65] Loading cluster: multinode-232000
	I0917 02:28:43.823032    5399 notify.go:220] Checking for updates...
	I0917 02:28:43.823342    5399 config.go:182] Loaded profile config "multinode-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:28:43.823358    5399 status.go:255] checking status of multinode-232000 ...
	I0917 02:28:43.823774    5399 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:28:43.823825    5399 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:28:43.832491    5399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53586
	I0917 02:28:43.832815    5399 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:28:43.833212    5399 main.go:141] libmachine: Using API Version  1
	I0917 02:28:43.833221    5399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:28:43.833452    5399 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:28:43.833569    5399 main.go:141] libmachine: (multinode-232000) Calling .GetState
	I0917 02:28:43.833676    5399 main.go:141] libmachine: (multinode-232000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:28:43.833723    5399 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid from json: 5233
	I0917 02:28:43.834641    5399 main.go:141] libmachine: (multinode-232000) DBG | hyperkit pid 5233 missing from process table
	I0917 02:28:43.834664    5399 status.go:330] multinode-232000 host status = "Stopped" (err=<nil>)
	I0917 02:28:43.834673    5399 status.go:343] host is not running, skipping remaining checks
	I0917 02:28:43.834678    5399 status.go:257] multinode-232000 status: &{Name:multinode-232000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:28:43.834699    5399 status.go:255] checking status of multinode-232000-m02 ...
	I0917 02:28:43.834958    5399 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0917 02:28:43.834987    5399 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0917 02:28:43.843348    5399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53588
	I0917 02:28:43.843692    5399 main.go:141] libmachine: () Calling .GetVersion
	I0917 02:28:43.844024    5399 main.go:141] libmachine: Using API Version  1
	I0917 02:28:43.844034    5399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 02:28:43.844240    5399 main.go:141] libmachine: () Calling .GetMachineName
	I0917 02:28:43.844375    5399 main.go:141] libmachine: (multinode-232000-m02) Calling .GetState
	I0917 02:28:43.844465    5399 main.go:141] libmachine: (multinode-232000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0917 02:28:43.844534    5399 main.go:141] libmachine: (multinode-232000-m02) DBG | hyperkit pid from json: 5269
	I0917 02:28:43.845433    5399 main.go:141] libmachine: (multinode-232000-m02) DBG | hyperkit pid 5269 missing from process table
	I0917 02:28:43.845453    5399 status.go:330] multinode-232000-m02 host status = "Stopped" (err=<nil>)
	I0917 02:28:43.845464    5399 status.go:343] host is not running, skipping remaining checks
	I0917 02:28:43.845471    5399 status.go:257] multinode-232000-m02 status: &{Name:multinode-232000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.80s)
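
In the stderr above, each host is declared "Stopped" the moment the saved hyperkit pid is "missing from process table", and the remaining checks are skipped. A conventional way to make that determination on Unix is a signal-0 probe (an illustrative sketch, not necessarily how the hyperkit driver implements it):

    package main

    import (
    	"fmt"
    	"syscall"
    )

    // pidAlive reports whether a process with the given pid currently exists.
    // Signal 0 delivers nothing; the kernel only runs existence/permission checks.
    func pidAlive(pid int) bool {
    	err := syscall.Kill(pid, syscall.Signal(0))
    	// ESRCH: no such process. EPERM: it exists but belongs to another user.
    	return err == nil || err == syscall.EPERM
    }

    func main() {
    	fmt.Println(pidAlive(5233)) // pid recorded in the log above
    }
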

TestMultiNode/serial/RestartMultiNode (107.72s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-232000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0917 02:28:59.141542    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:29:38.885627    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-232000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m47.365273932s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-232000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (107.72s)

TestMultiNode/serial/ValidateNameConflict (42.1s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-232000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-232000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-232000-m02 --driver=hyperkit : exit status 14 (582.066651ms)

-- stdout --
	* [multinode-232000-m02] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-232000-m02' is duplicated with machine name 'multinode-232000-m02' in profile 'multinode-232000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-232000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-232000-m03 --driver=hyperkit : (37.756476087s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-232000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-232000: exit status 80 (291.637885ms)

-- stdout --
	* Adding node m03 to cluster multinode-232000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-232000-m03 already exists in multinode-232000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-232000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-232000-m03: (3.408866827s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.10s)
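
Condensed, the two guards exercised above (profile names as in the test; comments are editorial):

$ out/minikube-darwin-amd64 start -p multinode-232000-m02 --driver=hyperkit   # exit 14: name collides with an existing node of multinode-232000
$ out/minikube-darwin-amd64 start -p multinode-232000-m03 --driver=hyperkit   # unique name, starts normally
$ out/minikube-darwin-amd64 node add -p multinode-232000                      # exit 80: the next node name, m03, is taken by the profile just created
$ out/minikube-darwin-amd64 delete -p multinode-232000-m03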

TestPreload (150.53s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-083000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0917 02:31:35.809650    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-083000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m24.879649019s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-083000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-083000 image pull gcr.io/k8s-minikube/busybox: (1.740213459s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-083000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-083000: (8.380057388s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-083000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-083000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (50.133088615s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-083000 image list
helpers_test.go:175: Cleaning up "test-preload-083000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-083000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-083000: (5.239458011s)
--- PASS: TestPreload (150.53s)
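
The point of the sequence: with --preload=false, an image pulled by hand must still be present after a stop/start cycle. Reduced to its commands:

$ out/minikube-darwin-amd64 start -p test-preload-083000 --memory=2200 --preload=false --driver=hyperkit --kubernetes-version=v1.24.4
$ out/minikube-darwin-amd64 -p test-preload-083000 image pull gcr.io/k8s-minikube/busybox
$ out/minikube-darwin-amd64 stop -p test-preload-083000
$ out/minikube-darwin-amd64 start -p test-preload-083000 --memory=2200 --driver=hyperkit
$ out/minikube-darwin-amd64 -p test-preload-083000 image list   # the test asserts busybox is still listed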

TestSkaffold (114.76s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3736813887 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3736813887 version: (1.821639403s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-208000 --memory=2600 --driver=hyperkit 
E0917 02:36:35.848785    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-208000 --memory=2600 --driver=hyperkit : (37.160983472s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3736813887 run --minikube-profile skaffold-208000 --kube-context skaffold-208000 --status-check=true --port-forward=false --interactive=false
E0917 02:37:02.252511    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3736813887 run --minikube-profile skaffold-208000 --kube-context skaffold-208000 --status-check=true --port-forward=false --interactive=false: (57.38497634s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7d9bc64844-bll4v" [646923fb-87dd-4abc-bd66-6ad6dfd9f4f6] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.005777046s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-df4f8dff7-vkqnq" [11d73f9a-0123-4872-8a76-f2b9d3b4019d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004345505s
helpers_test.go:175: Cleaning up "skaffold-208000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-208000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-208000: (5.251586338s)
--- PASS: TestSkaffold (114.76s)
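
The same flow with a locally installed skaffold instead of the test's temp copy (assumes skaffold and a released minikube are on PATH):

$ minikube start -p skaffold-208000 --memory=2600 --driver=hyperkit
$ skaffold run --minikube-profile skaffold-208000 --kube-context skaffold-208000 --status-check=true --port-forward=false --interactive=false
$ kubectl get pods -n default -l app=leeroy-app   # the test waits up to 1m0s for Running
$ minikube delete -p skaffold-208000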

TestRunningBinaryUpgrade (104.64s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3316773914 start -p running-upgrade-912000 --memory=2200 --vm-driver=hyperkit 
E0917 02:51:35.882348    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3316773914 start -p running-upgrade-912000 --memory=2200 --vm-driver=hyperkit : (1m1.194966261s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-912000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-912000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (36.626794251s)
helpers_test.go:175: Cleaning up "running-upgrade-912000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-912000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-912000: (5.246548311s)
--- PASS: TestRunningBinaryUpgrade (104.64s)
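
The upgrade path being exercised: an old release creates the cluster and leaves it running, then the freshly built binary restarts the same profile in place. Schematically (the old-binary name is illustrative; the test downloads v1.26.0 to a temp file):

$ ./minikube-v1.26.0 start -p running-upgrade-912000 --memory=2200 --vm-driver=hyperkit   # note the legacy --vm-driver flag
$ out/minikube-darwin-amd64 start -p running-upgrade-912000 --memory=2200 --driver=hyperkit
$ out/minikube-darwin-amd64 delete -p running-upgrade-912000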

TestKubernetesUpgrade (1326.05s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-006000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-006000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (51.480144924s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-006000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-006000: (2.375166097s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-006000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-006000 status --format={{.Host}}: exit status 7 (67.149017ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-006000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit 
E0917 02:53:42.288791    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:53:59.214491    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:56:35.882591    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:57:58.372429    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:58:59.217500    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:59:21.454179    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 03:01:35.885087    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 03:02:58.373445    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 03:02:58.966073    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 03:03:59.216907    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-006000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit : (10m43.505634618s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-006000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-006000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-006000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (599.316624ms)
-- stdout --
	* [kubernetes-upgrade-006000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-006000
	    minikube start -p kubernetes-upgrade-006000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0060002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-006000 --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-006000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit 
E0917 03:06:35.968706    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 03:07:58.461262    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 03:08:59.305671    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 03:10:22.383121    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
E0917 03:11:35.974099    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
E0917 03:12:58.464734    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
E0917 03:13:59.310313    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/functional-965000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-006000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit : (10m22.708896441s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-006000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-006000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-006000: (5.26863561s)
--- PASS: TestKubernetesUpgrade (1326.05s)
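
Three phases: upgrade v1.20.0 -> v1.31.1 across a stop, confirm a direct downgrade is refused, then prove the cluster still restarts at the new version. Schematically:

$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-006000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit
$ out/minikube-darwin-amd64 stop -p kubernetes-upgrade-006000
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-006000 --memory=2200 --kubernetes-version=v1.31.1 --driver=hyperkit
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-006000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit   # exit 106: K8S_DOWNGRADE_UNSUPPORTED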

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.17s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19648
- KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2617157680/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2617157680/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2617157680/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2617157680/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.17s)
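
The warning is expected here: the hyperkit driver binary must be root-owned and setuid, and the non-interactive CI run cannot sudo, so the test only verifies that minikube skips the driver upgrade gracefully. On a workstation the equivalent one-time setup would target the real MINIKUBE_HOME (path illustrative):

$ sudo chown root:wheel ~/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s ~/.minikube/bin/docker-machine-driver-hyperkit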

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.92s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19648
- KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current879456507/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current879456507/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current879456507/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current879456507/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.92s)

TestStoppedBinaryUpgrade/Setup (1.44s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.44s)

TestStoppedBinaryUpgrade/Upgrade (119.95s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.257024257 start -p stopped-upgrade-794000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.257024257 start -p stopped-upgrade-794000 --memory=2200 --vm-driver=hyperkit : (38.574169381s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.257024257 -p stopped-upgrade-794000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.257024257 -p stopped-upgrade-794000 stop: (8.244094896s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-794000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0917 03:16:01.550654    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-794000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m13.129733299s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.95s)
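
Same shape as TestRunningBinaryUpgrade, except the old cluster is stopped before the new binary adopts it (old-binary name illustrative):

$ ./minikube-v1.26.0 start -p stopped-upgrade-794000 --memory=2200 --vm-driver=hyperkit
$ ./minikube-v1.26.0 -p stopped-upgrade-794000 stop
$ out/minikube-darwin-amd64 start -p stopped-upgrade-794000 --memory=2200 --driver=hyperkit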

TestStoppedBinaryUpgrade/MinikubeLogs (2.49s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-794000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-794000: (2.492310069s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.49s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-118000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-118000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (474.742033ms)
-- stdout --
	* [NoKubernetes-118000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1025/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1025/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)
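
A Kubernetes version given on the command line (or pinned in the global config) is rejected together with --no-kubernetes; the recovery the error message itself suggests:

$ minikube config unset kubernetes-version
$ out/minikube-darwin-amd64 start -p NoKubernetes-118000 --no-kubernetes --driver=hyperkit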

TestNoKubernetes/serial/StartWithK8s (74.32s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-118000 --driver=hyperkit 
E0917 03:16:35.979687    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/addons-190000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-118000 --driver=hyperkit : (1m14.136255418s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-118000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (74.32s)

TestNoKubernetes/serial/StartWithStopK8s (8.73s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-118000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-118000 --no-kubernetes --driver=hyperkit : (6.186925963s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-118000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-118000 status -o json: exit status 2 (154.148861ms)
-- stdout --
	{"Name":"NoKubernetes-118000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-118000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-118000: (2.392640334s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.73s)
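
Note the exit code: status printed the JSON but returned 2 here because kubelet is stopped. A single field can be pulled out with jq (assuming jq is installed):

$ out/minikube-darwin-amd64 -p NoKubernetes-118000 status -o json | jq -r '.Kubelet'
Stopped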

TestNoKubernetes/serial/Start (18.78s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-118000 --no-kubernetes --driver=hyperkit 
E0917 03:17:58.468985    1560 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1025/.minikube/profiles/skaffold-208000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-118000 --no-kubernetes --driver=hyperkit : (18.78376951s)
--- PASS: TestNoKubernetes/serial/Start (18.78s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-118000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-118000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (125.93393ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)
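
The probe leans on systemctl's exit status: is-active succeeds only when the unit is active, so the status 3 (inactive) surfacing through minikube ssh is exactly the outcome the test wants. By hand:

$ out/minikube-darwin-amd64 ssh -p NoKubernetes-118000 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"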

TestNoKubernetes/serial/ProfileList (0.53s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.53s)

TestNoKubernetes/serial/Stop (2.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-118000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-118000: (2.371966187s)
--- PASS: TestNoKubernetes/serial/Stop (2.37s)

TestNoKubernetes/serial/StartNoArgs (19.43s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-118000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-118000 --driver=hyperkit : (19.430842236s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.43s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-118000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-118000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (156.732091ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

Test skip (18/219)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)